Trust Assessment
bundle-checker received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, User-controlled file content directly injected into LLM prompt, User project configuration files sent to external AI service.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | User-controlled file content directly injected into LLM prompt The skill reads `package.json` and various build configuration files (e.g., `webpack.config.js`, `vite.config.ts`) from a user-specified directory. The content of these files is then directly concatenated into the `context` string, which is sent as a `user` message to the OpenAI `gpt-4o-mini` model. A malicious user can craft these files with prompt injection payloads to attempt to manipulate the LLM's instructions, bypass safety mechanisms, or extract information. Implement robust sanitization or a structured data format (e.g., JSON schema validation) for user-provided file content before it is included in the LLM prompt. Consider using a separate, isolated LLM call for potentially untrusted content or employing techniques like XML/JSON tagging to delineate trusted instructions from untrusted data. Ensure that the LLM's system prompt is sufficiently robust to resist manipulation. | LLM | src/index.ts:30 | |
| HIGH | User project configuration files sent to external AI service The skill reads the full content of `package.json` and several build configuration files (e.g., `webpack.config.js`, `vite.config.ts`, `tsconfig.json`) from the user's project directory. This content is then transmitted to the OpenAI API for analysis. While intended for legitimate analysis, these files can inadvertently contain sensitive information such as internal repository URLs, private package names, build secrets, or environment variable placeholders, leading to potential data exfiltration to a third-party service. Review the necessity of sending the *full content* of all these files. Consider extracting only relevant metadata (e.g., dependency names, versions, specific config flags) rather than the entire file content. Implement a clear warning to the user about data transmission and provide options to redact sensitive information or review the data before sending. Ensure that no actual secrets (like API keys) are ever stored directly in these configuration files. | LLM | src/index.ts:19 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/bundle-checker/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/770a819789794646)
Powered by SkillShield