Trust Assessment
prefetcher received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User-Provided File Content, Data Exfiltration via Third-Party API.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Provided File Content The skill directly incorporates content from user-provided files into the LLM's 'user' message without sanitization or validation. An attacker can craft malicious code files (e.g., .js, .ts) within the scanned directory containing instructions designed to manipulate the LLM's behavior, override system prompts, or extract information. The `substring(0, 60000)` truncation is insufficient to prevent targeted injections. Implement robust sanitization and validation of user-provided file content before it is sent to the LLM. Consider using a separate, isolated LLM call for analyzing untrusted content, or strictly limiting the LLM's capabilities when processing such input. Alternatively, use a technique like 'sandwich prompting' where untrusted input is placed between strong system instructions and a final instruction to ignore previous user input. | LLM | src/index.ts:30 | |
| HIGH | Data Exfiltration via Third-Party API The `scanPages` function reads the content of arbitrary files (with specified extensions) from a user-provided directory. This content is then concatenated and sent to the OpenAI API. This poses a significant data exfiltration risk, as sensitive code, configuration, or proprietary information present in the scanned files could be transmitted to a third-party service. While the `glob` patterns limit file types, these types can still contain sensitive data. Minimize the amount of user data sent to external APIs. If sending file content is necessary, ensure explicit user consent for data transmission. Implement strict data filtering to remove sensitive information (e.g., API keys, credentials, PII) before sending it to the LLM. Consider processing sensitive data locally or using a privacy-preserving LLM deployment. | LLM | src/index.ts:15 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/prefetcher/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/97d5a9052fc33cb6)
Powered by SkillShield