Security Audit
autoclaw-cc/xiaohongshu-skills:skills/xhs-content-ops
github.com/autoclaw-cc/xiaohongshu-skills

Trust Assessment
autoclaw-cc/xiaohongshu-skills:skills/xhs-content-ops received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via Unsanitized User Input in CLI Arguments and Arbitrary File Read/Exfiltration via Image Paths in Publish Command.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 11, 2026 (commit c26fa986). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
HIGH: Potential Command Injection via Unsanitized User Input in CLI Arguments (LLM layer, SKILL.md:68)

The skill instructs the LLM to construct shell commands by interpolating user-provided keywords and other parameters into `python scripts/cli.py` calls (e.g., `--keyword "目标关键词"`, "target keyword"). If the LLM does not quote or escape user input before passing it as an argument to `cli.py`, a malicious user could inject additional command-line arguments or even shell commands. Although the examples show quoted strings, nothing explicitly requires the LLM to enforce quoting for all user-provided input, leaving the skill open to prompt-injection attempts that manipulate command execution.

Recommendation: Explicitly instruct the LLM to always quote and escape user-provided strings when constructing command-line arguments, for example: "When incorporating user input into command arguments, enclose the input in double quotes and escape any internal double quotes or special shell characters to prevent command injection." Alternatively, if `cli.py` supports it, pass complex or untrusted input through a safer channel such as a temporary file or standard input.

HIGH: Arbitrary File Read/Exfiltration via Image Paths in Publish Command (LLM layer, SKILL.md:120)

The `publish` command explicitly allows absolute paths for images (e.g., `--images "/abs/path/pic1.jpg"`). If a malicious user can trick the LLM into providing a path to a sensitive file (e.g., `/etc/passwd` or `~/.ssh/id_rsa`) instead of an intended image, the skill could be used to read and exfiltrate that file's contents by publishing it to Xiaohongshu. Publishing does require user confirmation, but that confirmation typically covers the post's content, not the source file paths, making this a subtle exfiltration vector.

Recommendation: Restrict the LLM's ability to construct arbitrary file paths for image uploads. Implement strict validation in `cli.py` to ensure that provided paths resolve within an allowed, sandboxed directory and correspond to legitimate image file types rather than sensitive system files. Additionally, explicitly instruct the LLM to use only user-provided or skill-generated image files, never system paths or paths outside a designated safe directory.
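The command-injection mitigation above can be sketched in Python. This is a minimal illustration, not code from the skill: the `build_cli_command` / `build_shell_command` helpers and the `search` subcommand name are hypothetical; only `scripts/cli.py` and `--keyword` appear in the finding. The safest pattern is to avoid the shell entirely by passing an argument list; when a shell string is unavoidable, `shlex.quote()` neutralizes metacharacters.

```python
import shlex


def build_cli_command(keyword: str) -> list[str]:
    # Argument-list form for subprocess.run(..., shell=False): each element
    # becomes one argv entry, so shell metacharacters in `keyword` (";", "|",
    # "$(...)") are passed through literally and never interpreted.
    return ["python", "scripts/cli.py", "search", "--keyword", keyword]


def build_shell_command(keyword: str) -> str:
    # If a single shell string is unavoidable, shlex.quote() wraps the value
    # so the shell treats it as one literal word.
    return f"python scripts/cli.py search --keyword {shlex.quote(keyword)}"
```

A list built this way would be executed with `subprocess.run(cmd, check=True)` (no `shell=True`), which is the property the finding's recommendation is driving at.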
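The path-validation recommendation for the publish finding can likewise be sketched. The sandbox directory, allowed extensions, and `validate_image_path` helper below are all assumptions for illustration; the report does not specify how `cli.py` should implement the check.

```python
from pathlib import Path

ALLOWED_IMAGE_DIR = Path("/var/skill/images")  # hypothetical sandbox root
ALLOWED_SUFFIXES = {".jpg", ".jpeg", ".png", ".webp"}


def validate_image_path(raw_path: str) -> Path:
    """Reject paths outside the sandbox or without an image extension.

    resolve() collapses ".." segments (and symlinks for existing
    components) before the containment check, so a traversal like
    "/var/skill/images/../../etc/passwd" is caught.
    """
    path = Path(raw_path).resolve()
    if not path.is_relative_to(ALLOWED_IMAGE_DIR):
        raise ValueError(f"path escapes sandbox: {raw_path}")
    if path.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"not an allowed image type: {raw_path}")
    return path
```

`Path.is_relative_to` requires Python 3.9+; on older versions the same check can be written with `Path.relative_to` in a try/except.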