Trust Assessment
dark-mode received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 1 high, 2 medium, and 1 low severity. Key findings include Prompt Injection via User File Content (critical), Data Exfiltration to Third-Party LLM (high), and an unpinned npm dependency version (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 46/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via User File Content.** The skill directly embeds the full content of user-provided files into the 'user' role message sent to the OpenAI API. Malicious content within these files could contain instructions designed to manipulate the LLM's behavior, potentially overriding system prompts, generating harmful outputs, or performing unintended actions. *Remediation:* Implement robust input sanitization and validation for user-provided file content before sending it to the LLM. Consider using structured inputs (e.g., JSON mode with a strict schema) or separating user content from instructions more clearly to reduce the attack surface. Add specific LLM guardrails to detect and reject malicious prompts. | LLM | src/index.ts:20 |
| HIGH | **Data Exfiltration to Third-Party LLM.** The skill reads the entire content of user-specified files and transmits this data to the OpenAI API for processing. This constitutes data exfiltration, as sensitive, proprietary, or personally identifiable information (PII) present in the user's files will be sent to a third-party service (OpenAI). *Remediation:* Clearly inform users that their file content will be sent to a third-party LLM provider. Advise against processing files containing sensitive data. Explore options for local processing, data redaction, or anonymization if feasible, or offer an on-premise LLM option for sensitive workloads. | LLM | src/index.ts:18 |
| MEDIUM | **Unpinned npm dependency version.** Dependency 'commander' is not pinned to an exact version ('^12.1.0'). *Remediation:* Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/dark-mode/package.json |
| MEDIUM | **Broad File System Read/Write Permissions.** The skill is designed to read and write arbitrary files and directories specified by the user. While necessary for its functionality, this broad access, especially when combined with the data exfiltration vulnerability, means the skill can read any file the user has access to and send its content to a third-party LLM. It can also overwrite any file the user has write permissions for with LLM-generated content. *Remediation:* Document the full scope of file system access and the implications of processing sensitive files. While restricting access for a file modification tool is challenging, users should be fully aware of the potential risks. Consider implementing additional checks or warnings for highly sensitive file types or locations. | LLM | src/cli.ts:29 |
| LOW | **Unpinned Dependencies in package.json.** The `package.json` file uses caret (`^`) ranges for dependencies (e.g., `"openai": "^4.73.0"`). While `package-lock.json` pins exact versions for reproducible builds, caret ranges in `package.json` allow automatic updates to minor or patch versions when `npm install` is run without a lockfile, potentially introducing new vulnerabilities or breaking changes if a dependency releases a malicious or faulty update. *Remediation:* Pin exact versions for all dependencies in `package.json` (e.g., `"openai": "4.73.0"`) to ensure consistent and predictable builds. Alternatively, use a tool like Dependabot or Renovate to monitor for security vulnerabilities in locked dependencies and manage updates. | LLM | package.json:12 |
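The two dependency findings share one remediation: pin exact versions in `package.json`. Assuming the versions cited in the findings, the `dependencies` section would drop the caret ranges, e.g.:

```json
{
  "dependencies": {
    "commander": "12.1.0",
    "openai": "4.73.0"
  }
}
```

With `npm`, the same result comes from installing with the `--save-exact` flag (or setting `save-exact=true` in `.npmrc`), so future installs record exact versions automatically.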
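The critical finding recommends sanitizing and delimiting user file content before it reaches the LLM. The sketch below shows one way that could look; the names (`sanitizeFileContent`, `SUSPICIOUS_PATTERNS`, the delimiter markers) are illustrative assumptions, not code from the dark-mode skill, and pattern matching alone is not a complete defense against prompt injection.

```typescript
// Illustrative sketch (not from the dark-mode source): delimit user file
// content so the model can distinguish data from instructions, and flag
// content that resembles an injection attempt for refusal or confirmation.

const SUSPICIOUS_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now/i,
  /system prompt/i,
];

function sanitizeFileContent(content: string): { text: string; flagged: boolean } {
  // Heuristic screen: does the file content look like it is addressing the LLM?
  const flagged = SUSPICIOUS_PATTERNS.some((p) => p.test(content));
  // Wrap the data in explicit delimiters instead of splicing it raw
  // into the 'user' message.
  const text = `<<<USER_FILE_START>>>\n${content}\n<<<USER_FILE_END>>>`;
  return { text, flagged };
}

const result = sanitizeFileContent("body { color: #fff }");
console.log(result.flagged); // false for ordinary CSS
```

A caller would then refuse (or ask the user to confirm) when `flagged` is true, rather than forwarding the content blindly; stronger mitigations, such as structured JSON inputs with a strict schema, are layered on top of this.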