Trust Assessment
prefetch-suggester received a trust score of 60/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Local file contents sent to external AI service, User-controlled file content used in LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Local file contents sent to external AI service The skill reads the content of local files (JavaScript, TypeScript, JSX, TSX, Vue, Svelte) from a user-specified directory and sends them to the OpenAI API. This poses a significant risk of exfiltrating sensitive code, configuration, or proprietary information to a third-party service. The `OPENAI_API_KEY` is used to authenticate this data transmission. Implement strict filtering or sanitization of file contents before sending to the LLM. Consider using a local LLM or a highly restricted API endpoint for sensitive data. Explicitly inform users about data transmission and obtain consent. Limit the scope of files read to only what is absolutely necessary and non-sensitive. | LLM | src/index.ts:29 | |
| HIGH | User-controlled file content used in LLM prompt The content of local files, which can be controlled by the user (or an attacker if they can place files in the scanned directory), is directly inserted into the `user` message of the OpenAI API call. This allows for prompt injection attacks where malicious instructions embedded in the scanned files could manipulate the LLM's behavior, potentially leading to unintended actions, disclosure of the system prompt, or generation of harmful content. Implement robust input sanitization and validation for all user-controlled content before it is passed to the LLM. Consider using techniques like prompt templating, content filtering, or separating user input from system instructions more clearly. If possible, process user content with a separate, less privileged LLM call or a local parser before feeding it to the main LLM. | LLM | src/index.ts:30 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/prefetch-gen/package.json | |
| MEDIUM | Broad file system access for scanning The `scanPages` function uses a broad `glob` pattern (`**/*.{js,ts,jsx,tsx,vue,svelte}`) to read files from a user-specified directory. While the intent is to scan code, this broad access could inadvertently read sensitive files (e.g., configuration files, private keys if they happen to match the extension) if the user provides a directory containing such data, increasing the attack surface for data exfiltration. Narrow down the `glob` pattern to be more specific to known application code files. Implement additional checks to filter out potentially sensitive file names or paths. Clearly document the scope of file access to users. | LLM | src/index.ts:16 |
Scan History
Embed Code
[](https://skillshield.io/report/c5f4fa305b82579b)
Powered by SkillShield