Security Audit
autoclaw-cc/xiaohongshu-skills:skills/xhs-explore
github.com/autoclaw-cc/xiaohongshu-skills

Trust Assessment
autoclaw-cc/xiaohongshu-skills:skills/xhs-explore received a trust score of 51/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: Prompt Injection Attempt in Skill Boundary, Potential Command Injection via User-Controlled Arguments, and Exposure of Security Token (xsec_token) to LLM.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 11, 2026 (commit c26fa986). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection Attempt in Skill Boundary.** The '🔒 技能边界(强制)' ("Skill Boundary (Mandatory)") section contains explicit instructions designed to manipulate the host LLM's behavior, forcing it to use only the specified `python scripts/cli.py` and to ignore other potential tools or knowledge. Phrases such as '忽略其他项目' ("ignore other projects") and '禁止外部工具' ("external tools prohibited") are direct attempts to override the LLM's default instructions and capabilities, a classic prompt injection pattern. Remediation: remove or rephrase instructions that attempt to override the LLM's core directives or restrict its general capabilities; instead, define the skill's scope and available tools clearly without prohibitive language aimed at the LLM's internal reasoning. | LLM | SKILL.md:10 |
| HIGH | **Potential Command Injection via User-Controlled Arguments.** The skill constructs shell commands by interpolating user-provided input (e.g., `--keyword`, `--feed-id`, `--xsec-token`, `--user-id`) directly into arguments for `python scripts/cli.py`. If `cli.py` or the underlying shell execution mechanism (e.g., `subprocess.run(..., shell=True)`) does not properly sanitize or escape these arguments, a malicious user could inject arbitrary shell commands; for example, a keyword like `"foo; rm -rf /"` could lead to command execution. Remediation: strictly validate and escape all user-controlled inputs before execution; avoid `shell=True` in `subprocess` calls and pass arguments as a list so the shell never interprets them. The `cli.py` script itself must also implement robust input sanitization. | Static | SKILL.md:60 |
| MEDIUM | **Exposure of Security Token (xsec_token) to LLM.** The skill instructs the LLM to retrieve an `xsec_token` and pass it as an argument to `cli.py`. This token appears to be a security-sensitive credential, and while the skill does not explicitly instruct the LLM to reveal it, the LLM's direct handling of the token leaves it vulnerable to prompt-based exfiltration: a crafted prompt could trick the LLM into disclosing the value, potentially enabling unauthorized access or session hijacking. Remediation: implement strict output filtering or masking for sensitive tokens like `xsec_token` in LLM responses, explicitly instruct the LLM never to output the raw token value, and consider whether the token needs to be exposed to the LLM at all or can be handled by a more secure, isolated component. | Static | SKILL.md:40 |
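The HIGH finding's remediation can be illustrated with a minimal sketch. The real `cli.py` is simulated here with a stand-in one-liner (an assumption for self-containment): passing arguments as a list with the default `shell=False` delivers user input to the child process as literal argv entries, so shell metacharacters are never interpreted.

```python
import subprocess
import sys

def run_cli(args):
    """Invoke a CLI with user input passed as discrete argv entries (no shell)."""
    # With a list and shell=False, each element reaches the child process as
    # exactly one argv entry; ';', '&&', '|' and friends are inert text.
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout

# Stand-in for scripts/cli.py: a one-liner that echoes its last argument.
malicious = "foo; rm -rf /"
out = run_cli([sys.executable, "-c", "import sys; print(sys.argv[-1])",
               "--keyword", malicious])
print(out.strip())  # the payload arrives as literal text; nothing is executed

# The dangerous pattern the finding warns about (do NOT do this):
#   subprocess.run(f"python scripts/cli.py --keyword {keyword}", shell=True)
# With keyword = 'foo; rm -rf /', a shell would execute the injected command.
```

The same list-form discipline must hold inside `cli.py` for any commands it spawns in turn.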
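One way to apply the MEDIUM finding's remediation is to redact token values before any text is surfaced to the user or written to logs. A minimal sketch, assuming `xsec_token` appears as a URL query parameter (the exact token alphabet is an assumption):

```python
import re

# Assumed token format: URL-safe characters following "xsec_token=".
TOKEN_RE = re.compile(r"(xsec_token=)[A-Za-z0-9_\-=%]+")

def mask_tokens(text: str) -> str:
    """Redact xsec_token values so the raw credential never appears in output."""
    return TOKEN_RE.sub(r"\1[REDACTED]", text)

print(mask_tokens("https://example.com/item?xsec_token=ABC123xyz&source=feed"))
# -> https://example.com/item?xsec_token=[REDACTED]&source=feed
```

Masking is a mitigation, not a fix: the stronger option the finding suggests is keeping the token out of the LLM's context entirely and injecting it in an isolated component at call time.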
[Full report](https://skillshield.io/report/7f640a52ab018ff5)