Trust Assessment
xhs-note-creator received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 14 findings: 0 critical, 2 high, 10 medium, 1 low, and 1 informational. Key findings include "Unsafe deserialization / dynamic eval," "Suspicious import: requests," and "Unpinned npm dependency version."
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Dependency Graph layer scored lowest at 49/100, making it the primary area for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (14)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **`XHS_COOKIE` can be exfiltrated via a configurable API endpoint.** The `scripts/publish_xhs.py` script reads the sensitive `XHS_COOKIE` from environment variables or a `.env` file. When `--api-mode` is enabled, the cookie is sent to the endpoint configured by the `XHS_API_URL` environment variable (default `http://localhost:5005`). An attacker who can manipulate `XHS_API_URL` (e.g., through prompt injection into the LLM that invokes the script) could redirect the cookie to a malicious server, leading to credential exfiltration. *Remediation:* (1) if `--api-mode` is necessary, strictly allowlist the endpoints permitted in `XHS_API_URL`, or fix it to a trusted internal service; (2) prefer the local publishing mode (the `xhs` library) over `--api-mode` for sensitive data, reducing the attack surface of a configurable external API; (3) ensure untrusted input cannot set or override sensitive environment variables such as `XHS_API_URL`. | LLM | scripts/publish_xhs.py:100 |
| HIGH | **Untrusted input in script arguments can lead to command injection.** `SKILL.md` instructs the LLM to invoke Python scripts (`render_xhs.py`, `publish_xhs.py`) and the Node.js script (`render_xhs.js`) with arguments such as `<markdown_file>`, `--title`, `--desc`, and `--images`. If the LLM's output for these arguments is passed to a shell command without sanitization or escaping, a prompt-injected response could execute arbitrary commands — for example, a file name like `malicious.md; rm -rf /;` or an image path like `image.png $(cat /etc/passwd)`. *Remediation:* (1) validate and sanitize all arguments passed to shell commands; (2) escape every user-provided argument for the target shell (e.g., `shlex.quote` in Python); (3) avoid `shell=True` in `subprocess.run` and pass arguments as a list; (4) run the scripts with the least privileges necessary. | LLM | SKILL.md:68 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. *Remediation:* remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/wusyx/auto-redbook-skills/scripts/render_xhs.js:163 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network or low-level system access, and network/system modules in skill code may indicate data exfiltration. *Remediation:* verify this import is necessary. | Static | skills/wusyx/auto-redbook-skills/scripts/publish_xhs.py:36 |
| MEDIUM | **Unpinned npm dependency version.** Dependency `js-yaml` is not pinned to an exact version (`^4.1.0`). *Remediation:* pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/wusyx/auto-redbook-skills/package.json |
| MEDIUM | **Unpinned Python dependency version.** Requirement `markdown>=3.4.0` is not pinned to an exact version. *Remediation:* pin Python dependencies with `==<exact version>`. | Dependencies | skills/wusyx/auto-redbook-skills/requirements.txt:4 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `PyYAML>=6.0` is not pinned to an exact version. *Remediation:* pin Python dependencies with `==<exact version>`. | Dependencies | skills/wusyx/auto-redbook-skills/requirements.txt:5 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `playwright>=1.40.0` is not pinned to an exact version. *Remediation:* pin Python dependencies with `==<exact version>`. | Dependencies | skills/wusyx/auto-redbook-skills/requirements.txt:8 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `xhs>=0.4.0` is not pinned to an exact version. *Remediation:* pin Python dependencies with `==<exact version>`. | Dependencies | skills/wusyx/auto-redbook-skills/requirements.txt:11 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `python-dotenv>=1.0.0` is not pinned to an exact version. *Remediation:* pin Python dependencies with `==<exact version>`. | Dependencies | skills/wusyx/auto-redbook-skills/requirements.txt:14 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `requests>=2.28.0` is not pinned to an exact version. *Remediation:* pin Python dependencies with `==<exact version>`. | Dependencies | skills/wusyx/auto-redbook-skills/requirements.txt:17 |
| MEDIUM | **Malicious Markdown can lead to XSS in the browser rendering context.** The rendering scripts (`scripts/render_xhs.py`, `scripts/render_xhs.js`, and their `_v2` versions) convert Markdown to HTML and load it into a Playwright browser via `page.set_content()` / `page.setContent()`. If the LLM generates Markdown containing HTML/JavaScript payloads (e.g., `<script>alert('XSS')</script>`), that code executes in the sandboxed browser context. While this does not directly affect the host system, it could be used to exploit browser vulnerabilities, make requests to internal services the browser can reach, or craft images used in other attacks. *Remediation:* (1) sanitize the generated HTML before passing it to `page.setContent()` (e.g., `DOMPurify` for Node.js, `Bleach` for Python); (2) configure the Markdown parser (`marked` or `markdown`) to disallow raw HTML or specific dangerous tags; (3) isolate the browser context so it has no network access to sensitive internal resources. | LLM | scripts/render_xhs.js:235 |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile was found (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`). *Remediation:* commit a lockfile for deterministic dependency resolution. | Dependencies | skills/wusyx/auto-redbook-skills/package.json |
| INFO | **Documentation suggests unpinned dependencies, increasing supply-chain risk.** While `requirements.txt` and `package.json` correctly pin dependency versions, `SKILL.md` and comments within `scripts/publish_xhs.py` suggest installing dependencies without version pinning (e.g., `pip install xhs python-dotenv requests`). Users who follow the documentation directly, rather than the pinned `requirements.txt`, could install the latest, potentially vulnerable, versions. *Remediation:* (1) make all installation instructions in `SKILL.md` and code comments refer to `requirements.txt` (Python) and `package.json` (Node.js), or provide pinned versions inline; (2) keep documentation consistent with the actual dependency files. | LLM | SKILL.md:158 |
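The allowlist remediation for the `XHS_COOKIE` exfiltration finding could look like the sketch below. The helper name `resolve_api_url` and the allowed-host set are illustrative assumptions, not code from the skill itself:

```python
import os
from urllib.parse import urlparse

# Hypothetical allowlist of endpoints trusted to receive the cookie.
ALLOWED_API_HOSTS = {"localhost:5005", "127.0.0.1:5005"}

def resolve_api_url(default: str = "http://localhost:5005") -> str:
    """Return XHS_API_URL only if its host is on the allowlist.

    Refusing unknown hosts means a prompt-injected XHS_API_URL cannot
    silently redirect credentials to an attacker-controlled server.
    """
    url = os.environ.get("XHS_API_URL", default)
    if urlparse(url).netloc not in ALLOWED_API_HOSTS:
        raise ValueError(f"Refusing to send credentials to untrusted endpoint: {url!r}")
    return url
```

A fixed internal URL (remediation option 1) is simpler still; the allowlist variant preserves the existing `--api-mode` configurability while bounding where the cookie can go.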
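The command-injection remediation (argument lists instead of `shell=True`, `shlex.quote` when a shell string is unavoidable) can be sketched as follows. The wrapper functions and argument names are illustrative; only the script names come from the report:

```python
import shlex
import subprocess

def render_note(markdown_file: str, title: str) -> None:
    # Passing arguments as a list (no shell=True) means no shell ever parses
    # them, so metacharacters in LLM-generated values are inert data.
    subprocess.run(
        ["python", "scripts/render_xhs.py", markdown_file, "--title", title],
        check=True,
    )

def shell_safe_command(markdown_file: str, title: str) -> str:
    # If a shell string is unavoidable, quote each untrusted argument.
    return (
        f"python scripts/render_xhs.py {shlex.quote(markdown_file)} "
        f"--title {shlex.quote(title)}"
    )
```

With quoting in place, a payload such as `malicious.md; rm -rf /` survives as a single literal argument rather than a command separator.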
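The "unsafe deserialization / dynamic eval" class of finding is usually avoidable by parsing payloads as data instead of executing them. A minimal Python illustration (the flagged code is JavaScript; these function names are hypothetical):

```python
import ast
import json

def parse_config_literal(text: str):
    # Accepts only Python literals (dicts, lists, numbers, strings):
    # no names, no calls, so embedded code cannot execute.
    return ast.literal_eval(text)

def parse_config_json(text: str):
    # JSON is an even narrower, language-neutral data format.
    return json.loads(text)
```

Either parser raises on anything that is not plain data, which is exactly the property `eval` on a decoded blob lacks.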
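For the Markdown-to-HTML XSS finding, the report suggests libraries such as `Bleach` or `DOMPurify`; the same allowlist idea can be sketched with only the standard library, as below. The tag and attribute allowlists here are illustrative assumptions, not the skill's actual configuration:

```python
import html
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "h1", "h2", "h3", "ul", "ol", "li", "strong", "em", "img", "br"}
ALLOWED_ATTRS = {"img": {"src", "alt"}}

class _Sanitizer(HTMLParser):
    """Re-emit only allowlisted tags/attributes; drop script/style bodies."""

    def __init__(self):
        super().__init__()
        self.out = []
        self._skip_depth = 0  # >0 while inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            if tag in ("script", "style"):
                self._skip_depth += 1
            return
        kept = [(k, v) for k, v in attrs if k in ALLOWED_ATTRS.get(tag, set())]
        attr_str = "".join(f' {k}="{html.escape(v or "", quote=True)}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip_depth = max(0, self._skip_depth - 1)
        elif tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self._skip_depth == 0:
            self.out.append(html.escape(data))  # re-escape decoded text

def sanitize_html(markup: str) -> str:
    s = _Sanitizer()
    s.feed(markup)
    return "".join(s.out)
```

Running the rendered HTML through such a filter before `page.set_content()` strips `<script>` payloads and event-handler attributes (e.g., `onerror`) while preserving the formatting tags the note images need; a maintained sanitizer library is still preferable in production.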
Full report: [skillshield.io/report/a4b040c4f3e31350](https://skillshield.io/report/a4b040c4f3e31350)