Trust Assessment
dark-mode-gen received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, User file content used directly in LLM prompt, enabling prompt injection, Skill has broad write access to user-specified files and directories.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | User file content used directly in LLM prompt, enabling prompt injection The skill sends the full content of the user's source files directly to the OpenAI API as part of the user prompt. A malicious user could embed prompt injection instructions within their source code (e.g., comments, string literals) to manipulate the LLM's behavior. If the LLM is successfully injected, it could be coerced into generating arbitrary or malicious code, which the skill then writes back to the user's file system, potentially corrupting source files or introducing vulnerabilities. Implement robust input sanitization or a more structured way to pass file content to the LLM that prevents arbitrary instruction injection. Consider using a separate, isolated context for the file content or employing techniques like XML/JSON tagging with strict parsing to delineate trusted instructions from untrusted content. Alternatively, use a model that is more resistant to prompt injection or add a safety layer to validate the LLM's output before writing to disk. | LLM | src/index.ts:20 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/dark-mode-gen/package.json | |
| MEDIUM | Skill has broad write access to user-specified files and directories The skill is designed to read and overwrite files and directories specified by the user. While necessary for its functionality, this grants the skill significant power to modify user's source code. If combined with a successful prompt injection attack (where the LLM generates malicious code), this could lead to corruption of user files or introduction of vulnerabilities into their codebase. The skill does not implement any sandboxing or strict validation of the output before writing. Implement a more robust validation mechanism for the AI's output before writing to disk. Consider offering an interactive review step for users, especially when processing directories, or a diff view. For critical applications, consider running the skill in a sandboxed environment or with more restrictive file system permissions. | LLM | src/cli.ts:33 | |
| INFO | User source code transmitted to OpenAI API The skill reads the full content of user-specified files and directories and transmits this code to the OpenAI API for processing. While this is the core functionality of an AI-powered code modification tool, users should be aware that their proprietary or sensitive source code will be sent to a third-party service (OpenAI). Clearly state in the skill's documentation (e.g., `SKILL.md`) that user code is sent to OpenAI for processing. Advise users to review OpenAI's data privacy policies and consider the implications for sensitive projects. | LLM | src/index.ts:17 |
Scan History
Embed Code
[](https://skillshield.io/report/ba23c1a88961ac26)
Powered by SkillShield