Trust Assessment
cache-strategy received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include: local file content sent to an external LLM, untrusted local file content used in an LLM prompt, and an unpinned npm dependency version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, driven by the critical data-exfiltration and prompt-injection findings below.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Local file content sent to external LLM.** The skill reads local code files from a user-specified directory and sends their content directly to the OpenAI API. This poses a significant data-exfiltration risk: sensitive information in these files (proprietary code, internal comments, hardcoded credentials, API keys) could be exposed to a third-party service. *Recommendation:* require explicit user consent before sending each file or sensitive data type to an external API, consider client-side analysis or anonymization techniques, and clearly document the data transmission to users. | LLM | src/index.ts:19 |
| CRITICAL | **Untrusted local file content used in LLM prompt.** The skill constructs its LLM prompt by concatenating the content of local code files, which are user-controlled input, directly into the 'user' message. Malicious instructions embedded in a scanned file could manipulate the LLM's behavior via prompt injection, potentially leading to data leakage, generation of harmful content, or other unintended actions. *Recommendation:* sanitize or strictly validate user-controlled content before incorporating it into prompts; apply input/output parsing, content filtering, or dedicated prompt-injection defenses; and isolate untrusted content from system instructions. | LLM | src/index.ts:25 |
| HIGH | **Excessive filesystem read permissions for LLM input.** The skill uses `glob` to read all `.js`, `.ts`, `.jsx`, and `.tsx` files within a user-specified directory. Although common sensitive directories (`node_modules`, `dist`, `.git`) are ignored, the broad range of file types and the user-controlled base directory (`cwd: dir`) let the skill access many potentially sensitive code files, amplifying the data-exfiltration and prompt-injection risks when combined with transmission to an external LLM. *Recommendation:* read only the files strictly necessary for the skill's function, give users granular include/exclude control, and prefer a whitelist of file types and paths over a broad glob pattern. | LLM | src/index.ts:13 |
| MEDIUM | **Unpinned npm dependency version.** The dependency `commander` is not pinned to an exact version (`^12.1.0`). *Recommendation:* pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/cache-strategy-gen/package.json |
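For the dependency finding, pinning means dropping the caret range so npm resolves exactly one version. A hypothetical pinned entry would look like:

```json
{
  "dependencies": {
    "commander": "12.1.0"
  }
}
```

Committing a lockfile and installing with `npm ci` provides a similar exact-version guarantee across environments.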
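For the two critical LLM findings, one common mitigation pattern is to redact likely secrets and fence untrusted file content before it is concatenated into a prompt. The sketch below is illustrative only and not part of the skill; the helper names and redaction patterns are assumptions:

```typescript
// Illustrative sketch: redact likely secrets, then wrap untrusted file
// content in a clearly delimited block so the model can be instructed to
// treat everything inside as data, not instructions.
const SECRET_PATTERNS: RegExp[] = [
  // Key/value assignments that look like credentials (pattern is a heuristic)
  /(?:api[_-]?key|secret|token|password)\s*[:=]\s*["'][^"']+["']/gi,
  // PEM-style private key blocks
  /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
];

function redactSecrets(source: string): string {
  return SECRET_PATTERNS.reduce(
    (text, pattern) => text.replace(pattern, "[REDACTED]"),
    source,
  );
}

function fenceUntrusted(path: string, content: string): string {
  // Escape '<' so embedded markup cannot close the fence or smuggle tags.
  return [
    `<untrusted-file path="${path}">`,
    content.replace(/</g, "&lt;"),
    `</untrusted-file>`,
  ].join("\n");
}
```

Redaction is best-effort and should supplement, not replace, explicit user consent before any file leaves the machine.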
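For the filesystem-scope finding, a whitelist filter over candidate paths is one way to narrow what a broad glob would otherwise match. A minimal sketch, where the allowed extensions and denied directory segments are illustrative assumptions, not the skill's actual configuration:

```typescript
// Illustrative allowlist filter: a path is eligible only if it has an
// explicitly allowed extension and contains no denied directory segment.
const ALLOWED_EXTENSIONS = new Set([".ts", ".tsx"]);
const DENIED_SEGMENTS = new Set(["node_modules", "dist", ".git", ".env"]);

function isAllowedPath(relPath: string): boolean {
  const segments = relPath.split("/");
  if (segments.some((segment) => DENIED_SEGMENTS.has(segment))) {
    return false;
  }
  const dot = relPath.lastIndexOf(".");
  return dot !== -1 && ALLOWED_EXTENSIONS.has(relPath.slice(dot));
}
```

A filter like this can be applied to glob results before any file is read, so the default posture is deny rather than allow.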