Trust Assessment
cache-strategy received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Untrusted input directly used in LLM prompt, Potential for sensitive data exfiltration to third-party LLM.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted input directly used in LLM prompt The content of files scanned from the user-provided directory (`codeChunks`) is directly concatenated and used as the `user` message in the OpenAI API call. This allows an attacker to inject arbitrary instructions into the LLM's prompt by crafting malicious content within the scanned files. The LLM could then be manipulated to ignore system instructions, reveal sensitive information, or generate unintended output. Implement a robust prompt templating strategy that clearly delineates untrusted user content from system instructions. For example, wrap the `combined` content in XML-like tags (e.g., `<user_code>...</user_code>`) and explicitly instruct the LLM in the system prompt to treat content within these tags as data to be analyzed, not as instructions to follow. Alternatively, use a tool-use pattern where the code is passed as an argument to a tool call. | LLM | src/index.ts:30 | |
| HIGH | Potential for sensitive data exfiltration to third-party LLM The skill reads the content of user-specified API files and sends them to the OpenAI API for analysis. While this is the intended functionality, if the user inadvertently points the tool to directories containing sensitive information (e.g., `.env` files, private keys, API credentials embedded in code comments, or other confidential data), this information could be transmitted to OpenAI. Although the `SKILL.md` mentions `OPENAI_API_KEY` and the purpose, the tool does not actively filter or redact potentially sensitive patterns from the file contents before transmission. 1. Enhance the `ignore` list in `glob` to include common sensitive file patterns (e.g., `.env`, `*.pem`, `*.key`, `credentials.json`). 2. Add explicit warnings in the CLI output or documentation about the risk of scanning directories containing sensitive data and advise users to only scan code intended for analysis. 3. Consider implementing basic redaction for common sensitive patterns (e.g., regex for API keys, tokens) before sending content to the LLM, though this is complex and prone to false positives/negatives. | LLM | src/index.ts:20 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/cache-strategy/package.json | |
| MEDIUM | Broad file system access for code analysis The `scanAPIFiles` function uses a broad `glob` pattern (`**/*.{js,ts,jsx,tsx}`) to read files within the user-specified directory. While an `ignore` list is present for common development artifacts, this broad access could lead to performance issues if a very large directory is scanned, or unintended reading of files that are not strictly "API code" but match the file extensions. This increases the attack surface for data exfiltration if sensitive files are present and not explicitly ignored. 1. Refine the `glob` pattern to be more specific if possible, or provide options for users to specify more granular file paths/patterns. 2. Implement a hard limit on the number of files or total content size that can be processed to prevent resource exhaustion. 3. Expand the `ignore` list to cover more potentially irrelevant or sensitive file types/paths. | LLM | src/index.ts:17 |
Scan History
Embed Code
[](https://skillshield.io/report/05d516fc5cf9b989)
Powered by SkillShield