Trust Assessment
husky-gen received a trust score of 28/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 3 critical, 0 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, an unpinned npm dependency version, and prompt injection leading to arbitrary code execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, making it the area most in need of improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution**: Node.js `child_process` require. *Remediation:* review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/lxgicstudios/husky-config-gen/dist/index.js:12 |
| CRITICAL | **Prompt injection leading to arbitrary code execution**: the skill passes user-controlled content (`package.json`) directly as a 'user' message to an OpenAI LLM whose system prompt instructs it to generate shell scripts for git hooks. An attacker can embed malicious instructions or shell commands in `package.json` to manipulate the LLM into generating arbitrary, harmful shell scripts, which are then written to executable files on the user's system, leading to arbitrary code execution. *Remediation:* strictly validate and sanitize the LLM's output before writing it to executable files. Ideally, have the LLM generate a structured configuration that a trusted local script interprets, rather than raw shell commands; alternatively, require explicit user approval for generated scripts or execute them in a sandboxed environment. Do not pass untrusted user input to an LLM that is instructed to generate executable code. | LLM | src/index.ts:20 |
| CRITICAL | **Command injection via LLM-generated executable scripts**: the skill generates git hook scripts with an LLM from user-provided `package.json` content; the `installHooks` function then writes these scripts to `.husky/pre-commit`, `.husky/pre-push`, and `.husky/commit-msg` and marks them executable (`chmodSync(path, "755")`). If an attacker can manipulate the LLM (e.g., via prompt injection in `package.json`), the generated malicious commands execute on the user's system whenever the corresponding git hook is triggered. *Remediation:* never write LLM-generated content to executable files without rigorous validation, sanitization, and ideally human review. Prefer having the LLM emit a structured data format that is processed by a trusted, locally controlled script; if direct script generation is unavoidable, enforce an allowlist of permitted commands or patterns, and ensure all generated commands are properly escaped and sandboxed. | LLM | src/index.ts:35 |
| MEDIUM | **Unpinned npm dependency version**: dependency `commander` is not pinned to an exact version (`^12.1.0`). *Remediation:* pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/husky-config-gen/package.json |
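For the medium-severity finding, "pinning" means replacing the caret range with an exact version in `package.json` (under npm semver semantics, `^12.1.0` permits any 12.x release). A minimal illustrative fragment:

```json
{
  "dependencies": {
    "commander": "12.1.0"
  }
}
```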
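The allowlist mitigation recommended for the two LLM findings can be sketched as follows. This is an illustrative sketch only, not code from the skill: `ALLOWED_COMMANDS` and `validateHookScript` are hypothetical names, and the allowed patterns are examples of the kind of known-safe hook commands a maintainer might permit. Every non-comment line of an LLM-generated hook script is checked before anything is written to a `.husky/*` file.

```typescript
// Hypothetical allowlist of safe git-hook commands (illustrative patterns).
const ALLOWED_COMMANDS: RegExp[] = [
  /^npx lint-staged$/,
  /^npm test$/,
  /^npm run [A-Za-z0-9:_-]+$/,
  /^npx commitlint --edit "\$1"$/,
];

// Returns whether every executable line of the generated script is
// allowlisted, and which lines were rejected if not.
function validateHookScript(script: string): { ok: boolean; rejected: string[] } {
  const rejected: string[] = [];
  for (const raw of script.split("\n")) {
    const line = raw.trim();
    // Blank lines, comments, and the shebang ("#!/bin/sh") are harmless.
    if (line === "" || line.startsWith("#")) continue;
    if (!ALLOWED_COMMANDS.some((re) => re.test(line))) rejected.push(line);
  }
  return { ok: rejected.length === 0, rejected };
}

// A hook would only be installed (written and chmod'ed) when validation passes:
const candidate = "#!/bin/sh\nnpx lint-staged\n";
console.log(validateHookScript(candidate).ok); // true: every command is allowlisted
```

The key design choice is that the allowlist is enforced by trusted local code, so a prompt-injected `package.json` cannot talk the LLM into producing a hook like `curl http://… | sh` that survives installation.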
[Full report](https://skillshield.io/report/22d7255e5b325990)