Security Audit
autoclaw-cc/xiaohongshu-skills:skills/xhs-auth
github.com/autoclaw-cc/xiaohongshu-skills

Trust Assessment
autoclaw-cc/xiaohongshu-skills:skills/xhs-auth received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 6 critical, 0 high, 1 medium, and 0 low severity. Key findings include Command Injection via User Input in CLI Arguments and Reliance on Unspecified External Script (Supply Chain Risk).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 0/100, reflecting the command injection findings detailed below.
Last analyzed on March 11, 2026 (commit c26fa986). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Command Injection via User Input in CLI Arguments The skill instructs the LLM to construct shell commands by directly embedding user-provided input (phone numbers, verification codes, account names) into command-line arguments. This creates a severe command injection vulnerability: a malicious user could inject arbitrary shell commands by crafting input containing shell metacharacters (e.g., `&&`, `||`, `;`, `$(...)`, `` ` ``). When the LLM executes these commands, the injected code runs with the permissions of the skill's execution environment. The LLM should be instructed to pass user input as distinct arguments to the command execution environment rather than concatenating it into a single shell string. For example, with a `subprocess.run` equivalent, arguments should be passed as a list of strings (e.g., `['python', 'scripts/cli.py', 'send-code', '--phone', user_phone_number]`), which prevents shell interpretation of user-provided data. Alternatively, `cli.py` should robustly sanitize or escape all user-provided arguments before use, though preventing shell injection at the execution boundary is generally safer. | Static | SKILL.md:108 |
| CRITICAL | Command Injection via User Input in CLI Arguments Same injection pattern as the SKILL.md:108 finding; the safe argument-list form here is `['python', 'scripts/cli.py', 'verify-code', '--code', user_verification_code]`. | Static | SKILL.md:124 |
| CRITICAL | Command Injection via User Input in CLI Arguments Same injection pattern as the SKILL.md:108 finding; the safe argument-list form here is `['python', 'scripts/cli.py', 'add-account', '--name', account_name]`. | Static | SKILL.md:146 |
| CRITICAL | Command Injection via User Input in CLI Arguments Same injection pattern as the SKILL.md:108 finding; the safe argument-list form here is `['python', 'scripts/cli.py', '--account', selected_account, 'check-login']`. | Static | SKILL.md:165 |
| CRITICAL | Command Injection via User Input in CLI Arguments Same injection pattern as the SKILL.md:108 finding; the safe argument-list form here is `['python', 'scripts/cli.py', 'set-default-account', '--name', account_name]`. | Static | SKILL.md:172 |
| CRITICAL | Command Injection via User Input in CLI Arguments Same injection pattern as the SKILL.md:108 finding; the safe argument-list form here is `['python', 'scripts/cli.py', 'remove-account', '--name', account_name]`. | Static | SKILL.md:173 |
| MEDIUM | Reliance on Unspecified External Script (Supply Chain Risk) The skill's core functionality relies entirely on an external Python script, `scripts/cli.py`, whose content is not provided within the skill package context. This introduces a significant supply chain risk. If `scripts/cli.py` is malicious, vulnerable, or compromised, the entire skill's security is undermined, potentially leading to data exfiltration, command execution, or other attacks. Without visibility into this script, its security posture cannot be assessed. Provide the source code for `scripts/cli.py` within the skill package for security review. Implement robust input validation and sanitization within `cli.py` for all arguments received. Ensure all dependencies used by `cli.py` are explicitly pinned to specific versions to mitigate risks from malicious updates. | Static | SKILL.md:20 |
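The remediation recommended in the findings above can be sketched as follows. The `send-code`/`--phone` invocation comes from the report's own examples; the function names here are hypothetical, and the snippet is a minimal sketch of the unsafe and safe patterns rather than the skill's actual code.

```python
import shlex
import subprocess


def send_code_unsafe(phone: str) -> None:
    # VULNERABLE: user input is interpolated into a shell string, so input
    # like "123; rm -rf ~" appends an attacker-controlled command that the
    # shell will execute.
    subprocess.run(f"python scripts/cli.py send-code --phone {phone}", shell=True)


def send_code_safe(phone: str) -> subprocess.CompletedProcess:
    # SAFE: each argument is a separate list element and shell=False (the
    # default), so the value reaches cli.py verbatim as argv data and shell
    # metacharacters are never interpreted.
    return subprocess.run(
        ["python", "scripts/cli.py", "send-code", "--phone", phone],
        check=False,
    )


def build_shell_command(phone: str) -> str:
    # Fallback when a shell string is unavoidable: escape each user-supplied
    # value so metacharacters lose their special meaning.
    return f"python scripts/cli.py send-code --phone {shlex.quote(phone)}"
```

The list form is preferable to escaping because it removes the shell from the execution path entirely; `shlex.quote` only helps if every interpolated value is quoted without exception.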
[Full report](https://skillshield.io/report/0b340e2b0d915f32)
Powered by SkillShield