Trust Assessment
api-docs-gen received a trust score of 51/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 1 critical, 2 high, 3 medium, and 0 low severity. Key findings include Unsafe deserialization / dynamic eval, Unpinned npm dependency version, Prompt Injection via User Code.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings6
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User Code The skill directly incorporates the content of user-provided route files into the 'user' message of the LLM prompt. A malicious user could embed prompt injection instructions within their route files (e.g., comments, string literals) to manipulate the LLM's behavior, leading to unintended outputs, disclosure of system prompts, or generation of harmful content. Implement robust input sanitization or a separate LLM call to classify and filter potentially malicious instructions from user-provided code before it's sent to the main documentation generation LLM. Consider using a 'sandwich' prompt or few-shot examples to reinforce the LLM's role and make it more resistant to adversarial instructions. | LLM | src/index.ts:30 | |
| HIGH | Data Exfiltration of User Code to Third-Party LLM The skill reads the full content of user-specified route files and sends them to the OpenAI API for processing. If these route files contain sensitive data (e.g., API keys, PII, proprietary algorithms, internal comments), this data will be transmitted to OpenAI's servers. While OpenAI has data privacy policies, this still represents a transfer of potentially sensitive user data to a third-party service, which may not align with all users' security or compliance requirements. Clearly inform users about the data transfer to OpenAI and the implications for sensitive data. Implement mechanisms to redact or filter sensitive information from the route files before sending them to the LLM, if feasible. Provide options for users to review or approve the content being sent, or to run the tool in a local-only mode if such an option becomes available. | LLM | src/index.ts:27 | |
| HIGH | Arbitrary File Overwrite via User-Controlled Output Path The output file path ('options.output') is directly controlled by the user via a command-line option and passed to 'writeFileSync' without sufficient validation. A malicious user could specify a path to a critical system file (e.g., '/etc/passwd', '~/.bashrc') or a sensitive configuration file, leading to arbitrary file overwrite. If combined with a prompt injection attack that causes the LLM to generate malicious content, this could lead to command injection upon subsequent execution of the overwritten file. Implement strict validation and sanitization of the output file path. Restrict output to a designated directory (e.g., a 'docs' folder within the project) and prevent path traversal (e.g., '..'). Confirm with the user before overwriting existing files, especially those outside the designated output directory. | LLM | src/cli.ts:20 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/lxgicstudios/api-docs-gen/dist/index.js:21 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/api-docs-gen/package.json | |
| MEDIUM | Unpinned Dependencies in package.json The 'package.json' file uses caret (^) ranges for all dependencies, allowing minor and patch version updates automatically. This introduces a supply chain risk where a malicious update to any dependency could be automatically installed and executed, potentially compromising the skill. While 'package-lock.json' pins versions, 'package.json' defines the acceptable range for new installations or updates. Pin dependencies to exact versions (e.g., "commander": "12.1.0") to ensure deterministic builds and prevent unexpected malicious updates. Regularly audit and update dependencies manually or through automated tools to mitigate risks from known vulnerabilities. | LLM | package.json:9 |
Scan History
Embed Code
[](https://skillshield.io/report/72ed5c43ba0bc880)
Powered by SkillShield