Trust Assessment
error-handler-gen received a trust score of 58/100, placing it in the Caution category: the skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 0 high, 2 medium, and 0 low severity. The key findings are user input injected directly into an LLM prompt, an arbitrary file write of LLM-generated content, and an unpinned npm dependency version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **User input directly injected into LLM prompt.** The `framework` and `lang` parameters, which are user-controlled CLI arguments, are interpolated directly into the system and user messages sent to the OpenAI API. A malicious user can therefore inject instructions into the prompt and steer the LLM into generating arbitrary or harmful code instead of error-handling middleware. Recommendation: sanitize or strictly validate `framework` and `lang` before incorporating them into the prompt, and consider a structured input format or a tool-calling approach that separates user data from instructions. | LLM | src/index.ts:10 |
| CRITICAL | **Arbitrary file write of LLM-generated content.** The `ai-error-handler` CLI writes the LLM-generated code directly to the file named by the user's `--output` option; `path.resolve` does not block absolute paths or path-traversal sequences (`../`). Combined with the prompt-injection vulnerability, this lets an attacker generate malicious code and write it to sensitive system locations, potentially leading to command execution or data exfiltration. Recommendation: restrict `--output` to a safe, designated directory (e.g., a subdirectory of the current working directory), reject absolute and traversal paths, and implement strict path validation. | LLM | src/cli.ts:22 |
| MEDIUM | **Unpinned npm dependency version.** The `commander` dependency is not pinned to an exact version (`^12.1.0`). Recommendation: pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/error-handler-gen/package.json |
| MEDIUM | **Unpinned dependencies in package.json.** Dependencies are specified with semver ranges (e.g., `^12.1.0`, `^4.73.0`). Although `package-lock.json` pins specific versions, a fresh install without the lock file, or an update, could pull in newer, potentially vulnerable versions, increasing the risk of supply-chain attacks. Recommendation: pin all dependencies to exact versions (e.g., `12.1.0` instead of `^12.1.0`) in `package.json` for deterministic builds, and audit and update dependencies regularly. | LLM | package.json:13 |
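The prompt-injection remediation above can be sketched as an allowlist check applied before the CLI arguments reach the prompt. The parameter names follow the report; the supported values and function name are illustrative assumptions, not the skill's actual API:

```typescript
// Hypothetical allowlist validation for the user-controlled CLI arguments
// flagged in the report. The framework/language sets here are examples only;
// a real implementation would enumerate whatever the tool actually supports.
const SUPPORTED_FRAMEWORKS = new Set(["express", "fastify", "koa"]);
const SUPPORTED_LANGS = new Set(["typescript", "javascript"]);

function validateGeneratorInputs(framework: string, lang: string): void {
  // Reject anything not on the allowlist, so free-form instructions
  // can never reach the system or user messages sent to the LLM.
  if (!SUPPORTED_FRAMEWORKS.has(framework.toLowerCase())) {
    throw new Error(`Unsupported framework: ${framework}`);
  }
  if (!SUPPORTED_LANGS.has(lang.toLowerCase())) {
    throw new Error(`Unsupported language: ${lang}`);
  }
}
```

Because the allowlist admits only known tokens, injected payloads such as "ignore previous instructions" fail validation before any prompt is constructed.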
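The arbitrary-file-write remediation can likewise be sketched as a path-confinement check: resolve the user-supplied `--output` path against a base directory and refuse anything that escapes it. The function name is hypothetical; the technique is the standard `path.relative` containment idiom:

```typescript
import * as path from "path";

// Hypothetical sketch of the --output restriction the report recommends:
// resolve the user-supplied path against a base directory and reject any
// result that escapes it via absolute paths or "../" traversal.
function resolveSafeOutputPath(
  userPath: string,
  baseDir: string = process.cwd()
): string {
  const resolved = path.resolve(baseDir, userPath);
  const relative = path.relative(baseDir, resolved);
  // A path that escapes baseDir yields a relative path starting with ".."
  // (or, on Windows, an absolute path when drives differ).
  if (relative.startsWith("..") || path.isAbsolute(relative)) {
    throw new Error(`Refusing to write outside ${baseDir}: ${userPath}`);
  }
  return resolved;
}
```

Checking the *resolved* path rather than the raw argument is the key design choice: it catches both `/etc/passwd`-style absolute paths and traversal sequences that only escape after normalization.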
[View full report](https://skillshield.io/report/46b7150d34528776)