Trust Assessment
husky-gen received a trust score of 28/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 3 critical, 0 high, 2 medium, and 1 low severity. Key findings include arbitrary command execution, prompt injection via user-controlled input, and an unpinned npm dependency version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 31/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Node.js `child_process` require. *Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/lxgicstudios/husky-gen/dist/index.js:12 |
| CRITICAL | **Prompt Injection via User-Controlled Input.** The skill directly passes user-controlled content from `package.json` as the `user` message to the OpenAI API. A malicious `package.json` could be crafted to attempt prompt injection, manipulating the LLM's instructions to generate harmful or unintended git hook scripts. While the system prompt attempts to constrain the LLM's output to JSON, sophisticated injection techniques could potentially bypass these safeguards, leading to the generation of malicious code. *Remediation:* Implement robust input validation and sanitization for `pkgContent` before sending it to the LLM. Consider using a more structured input format or a dedicated prompt engineering technique (e.g., few-shot examples, input parsing) to reduce the attack surface. If direct user input must be sent, ensure the LLM's system prompt is highly resilient to adversarial inputs and that the LLM's output is thoroughly validated before execution. | LLM | src/index.ts:28 |
| CRITICAL | **Command Injection via LLM-Generated Executable Scripts.** The skill writes LLM-generated content (git hook scripts for `pre-commit`, `pre-push`, and `commit-msg`) directly to files within the `.husky/` directory. These files are then made executable (`chmod 755`). If the LLM is successfully manipulated via prompt injection (e.g., from a malicious `package.json`), it could generate arbitrary shell commands within these scripts. These malicious commands would then be executed by the user's git client whenever the corresponding git hook is triggered, leading to a severe command injection vulnerability. *Remediation:* Implement strict validation and sanitization of the LLM's output before writing it to executable files. Instead of directly writing LLM-generated scripts, consider generating only specific parameters or code snippets that are then integrated into predefined, safe script templates. Ensure that any LLM-generated commands are whitelisted or thoroughly checked against a set of allowed operations. Avoid direct execution of arbitrary LLM output. | LLM | src/index.ts:38 |
| MEDIUM | **Unpinned npm dependency version.** Dependency `commander` is not pinned to an exact version (`^12.1.0`). *Remediation:* Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/husky-gen/package.json |
| MEDIUM | **Data Exfiltration of Project Metadata to Third-Party AI.** The skill reads the entire `package.json` file from the user's project and sends its content to the OpenAI API for analysis and hook generation. While this is central to the skill's functionality, users should be aware that their project's metadata, including dependencies, scripts, and potentially private package names, is being transmitted to a third-party AI service. This could be a concern for projects with strict data privacy requirements. *Remediation:* Clearly document that `package.json` content is sent to OpenAI. Provide an option for users to review or redact sensitive parts of their `package.json` before it's sent, or to use a local, privacy-preserving analysis mode if available. Consider hashing or anonymizing non-essential data if possible, though this might impact LLM effectiveness. | LLM | src/index.ts:28 |
| LOW | **Unpinned Dependencies in package.json.** The `package.json` file uses caret (`^`) ranges for its dependencies (e.g., `"openai": "^4.73.0"`). While `package-lock.json` provides exact versions for current installations, using caret ranges in `package.json` means that future installations could pull in newer minor or patch versions without explicit review. This introduces a slight risk of unexpected behavior or vulnerabilities if a new version of a dependency introduces breaking changes or security flaws. This is a common practice in Node.js but is not strictly pinned. *Remediation:* For maximum supply chain security, consider pinning dependencies to exact versions (e.g., `"openai": "4.73.0"`) or using tilde (`~`) ranges for patch-only updates. Regularly audit dependencies for known vulnerabilities using tools like `npm audit`. | LLM | package.json:7 |
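Both dependency findings come down to the same change: replacing caret ranges with exact versions in `package.json`. Using the versions quoted in the report, the pinned form would look like:

```json
{
  "dependencies": {
    "commander": "12.1.0",
    "openai": "4.73.0"
  }
}
```

Running `npm config set save-exact true` makes future `npm install <pkg>` invocations record exact versions by default.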
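The arbitrary-command-execution finding recommends static commands over shell-built strings. A minimal sketch of that pattern, not code from husky-gen: `execFileSync` runs a fixed binary with an explicit argument vector and never spawns a shell, so shell metacharacters in arguments remain plain data.

```typescript
import { execFileSync } from "node:child_process";

// Run the current Node binary with a static argument vector.
// Because no shell is involved, an argument like "; rm -rf ." is
// passed through as a literal string, not interpreted as a command.
function runNode(args: string[]): string {
  return execFileSync(process.execPath, args, { encoding: "utf8" }).trim();
}

// Anti-pattern for contrast: execSync(`node ${userInput}`) would let
// userInput append extra shell commands.
```

For example, `runNode(["-p", "process.argv[1]", "a; echo pwned"])` prints the metacharacter-laden argument back verbatim instead of executing it.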
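The prompt-injection finding suggests a more structured input format in place of raw `package.json` text. One hypothetical shape of that mitigation (names `summarizePkg` and `MAX_LEN` are illustrative, not husky-gen's API): parse the file and forward only an allow-listed, size-bounded summary to the model.

```typescript
// Forward only the fields a hook-generation prompt plausibly needs,
// instead of the raw package.json string.
type PkgSummary = { scripts: Record<string, string>; deps: string[] };

const MAX_LEN = 2000; // illustrative cap on what may reach the LLM

function summarizePkg(raw: string): PkgSummary {
  const pkg = JSON.parse(raw);
  const scripts: Record<string, string> = {};
  for (const [name, cmd] of Object.entries(pkg.scripts ?? {})) {
    // Keep only short, printable script bodies; drop anything that could
    // smuggle long adversarial prompt text into the model input.
    if (typeof cmd === "string" && cmd.length <= 200 && !/[\u0000-\u001f]/.test(cmd)) {
      scripts[name] = cmd;
    }
  }
  const deps = Object.keys({ ...(pkg.dependencies ?? {}), ...(pkg.devDependencies ?? {}) });
  const summary: PkgSummary = { scripts, deps };
  if (JSON.stringify(summary).length > MAX_LEN) {
    throw new Error("package.json summary exceeds size budget");
  }
  return summary;
}
```

This narrows the attack surface but does not eliminate injection risk; the finding's advice to validate the LLM's output still applies.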
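The command-injection finding recommends whitelisting LLM-generated hook commands before writing them to executable files. A hedged sketch of such a gate (the allow-list and `isSafeHook` name are assumptions, not husky-gen internals): accept a generated script only if every command line starts with a known tool and contains no chaining or substitution characters.

```typescript
// Illustrative allow-list of tools a generated git hook may invoke.
const ALLOWED = ["npx", "npm", "yarn", "pnpm", "node", "echo", "exit"];

function isSafeHook(script: string): boolean {
  return script.split("\n").every((line) => {
    const t = line.trim();
    if (t === "" || t.startsWith("#")) return true; // blanks, comments, shebang
    if (/[;&|`$<>]/.test(t)) return false;          // no chaining, pipes, or substitution
    const cmd = t.split(/\s+/)[0];
    return ALLOWED.includes(cmd);                   // first token must be allow-listed
  });
}
```

A script that fails the check would be rejected before `chmod 755` ever runs, which is strictly safer than sanitizing after the fact.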
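The data-exfiltration finding suggests letting users redact sensitive parts of `package.json` before it leaves the machine. A hypothetical redaction pass along those lines (the field list and `redactPkg` name are illustrative): strip identifying metadata while keeping the fields hook generation depends on.

```typescript
// Fields that identify the project or its publishing setup but are not
// needed to decide which git hooks to generate.
const SENSITIVE = ["name", "author", "repository", "publishConfig", "private"];

function redactPkg(raw: string): string {
  const pkg = JSON.parse(raw);
  for (const key of SENSITIVE) {
    delete pkg[key]; // remove identifying metadata before transmission
  }
  return JSON.stringify(pkg, null, 2);
}
```

Per the finding, redaction should be paired with clear documentation that the remaining content is still sent to OpenAI.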
[View the full report on SkillShield](https://skillshield.io/report/7f518bbd398be9fa)
Powered by SkillShield