Trust Assessment
diagram-gen received a trust score of 51/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 1 high, 3 medium, and 0 low severity. Key findings include Unsafe deserialization / dynamic eval, Unpinned npm dependency version, Prompt Injection via User-Controlled File Content.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Controlled File Content The `generateDiagram` function reads content from user-specified local files and directly inserts this content into the `user` message of the OpenAI API call. A malicious actor could craft a file containing prompt injection instructions, attempting to manipulate the LLM's behavior, extract sensitive information from the system prompt, or generate unintended outputs. Implement robust input sanitization and validation for the `summary` content before sending it to the LLM. Consider using a separate, isolated LLM call for sanitization, or strictly define the expected input format and reject anything outside of it. If possible, use a less powerful model for initial processing or employ techniques like XML tagging or JSON schema validation for structured prompts to prevent arbitrary instruction injection. | LLM | src/index.ts:30 | |
| HIGH | Data Exfiltration to External AI Service The `collectFiles` function reads content from files within a user-specified directory (`<dir>` argument). The `generateDiagram` function then takes this content (up to 12000 characters) and sends it directly to the OpenAI API. This means that any sensitive information (e.g., credentials, private keys, proprietary code) present in the files within the user-provided directory could be exfiltrated to OpenAI's servers. 1. **Restrict File Access**: Limit the directories the tool can access (e.g., only allow subdirectories of the current working directory, or explicitly whitelist/blacklist paths). 2. **Content Filtering**: Implement client-side filtering or redaction of potentially sensitive patterns (e.g., API keys, private keys, common credential formats) before sending data to the LLM. 3. **User Consent/Warning**: Clearly inform the user that file contents will be sent to an external AI service and prompt for explicit consent, especially when processing potentially sensitive directories. 4. **Least Privilege**: Run the tool with the minimum necessary file system permissions. | LLM | src/index.ts:26 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/lxgicstudios/diagram-gen/dist/index.js:23 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/diagram-gen/package.json | |
| MEDIUM | Arbitrary File Write via User-Controlled Output Path The `cli.js` script allows the user to specify an arbitrary output file path via the `-o` or `--output` option. The `writeFileSync` function is then used to write the generated diagram to this path. If the process running the skill has elevated permissions, a malicious user could exploit this to overwrite critical system files (e.g., `/etc/passwd`) or user configuration files (e.g., `~/.bashrc`, `~/.ssh/authorized_keys`), leading to denial of service, privilege escalation, or unauthorized access. 1. **Path Validation**: Restrict the output path to be within a designated safe directory (e.g., the current working directory or a temporary directory). Prevent absolute paths or paths containing `..` to escape the intended directory. 2. **Permission Management**: Ensure the skill runs with the principle of least privilege, limiting its write access to only necessary locations. 3. **Confirmation**: For potentially sensitive write operations, prompt the user for confirmation. | LLM | src/cli.ts:20 |
Scan History
Embed Code
[](https://skillshield.io/report/724d26c21aed05f9)
Powered by SkillShield