Security Audit
mike-coulbourn/claude-vibes:plugins/vibes/skills/hooks-builder
github.com/mike-coulbourn/claude-vibes
Trust Assessment
mike-coulbourn/claude-vibes:plugins/vibes/skills/hooks-builder received a trust score of 0/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 19 findings: 8 critical, 10 high, 1 medium, and 0 low severity. Key findings include "File read + network send exfiltration," "Sensitive path access: AI agent config," and "Skill enables direct command injection via hook configuration."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100.
Last analyzed on March 22, 2026 (commit b6e9c9a1). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (19)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration** via AI agent config/credential file access. Remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. (Reported at 8 locations.) | Manifest | plugins/vibes/skills/hooks-builder/SKILL.md:34, 158, 325, 342, 374, 378, 382, 434 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path `~/.claude/` detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. (Reported at 8 locations.) | Static | plugins/vibes/skills/hooks-builder/SKILL.md:34, 158, 325, 342, 374, 378, 382, 434 |
| HIGH | **Skill enables direct command injection via hook configuration.** The skill teaches users to configure `command`-type hooks that execute shell commands. While the guide provides warnings and 'SAFE' patterns, it also presents 'UNSAFE' patterns that demonstrate direct command injection (e.g., unquoted variables and unquoted paths in JSON); a user might inadvertently implement these, leading to arbitrary command execution if the hook processes untrusted input. Because the skill's core functionality is enabling command execution, this is a fundamental risk unless implemented with extreme care. Remediation: strongly emphasize the 'SAFE' patterns, consider automated linting or validation of hook configurations, and add a prominent warning that all inputs to shell commands within hooks must be rigorously sanitized and quoted. | LLM | SKILL.md:204 |
| HIGH | **Supply chain risk from project-level hooks.** The skill describes hooks configured in `.claude/settings.json`, intended for 'Project hooks (team, committed)'. A malicious actor could embed harmful hooks in a project's `.claude/settings.json`; when another user clones and opens that project, the hooks execute with the user's permissions, potentially leading to data exfiltration, system compromise, or other malicious activity. The skill does not warn about the dangers of running hooks from untrusted project sources. Remediation: add a prominent warning in the 'Security Hardening' section about supply chain risks of project-level hooks, advise users to run only trusted projects or to meticulously audit `.claude/settings.json` before execution, and consider disabling project-level hooks by default or requiring explicit opt-in. | LLM | SKILL.md:50 |
| MEDIUM | **Potential prompt injection in 'prompt-based' hooks.** The skill introduces 'prompt-based' hooks whose `prompt` field can include variables like `$ARGUMENTS`. If `$ARGUMENTS` contains untrusted user input, a malicious user could craft input that manipulates the LLM's behavior, potentially bypassing security checks or extracting sensitive information the LLM can access. Remediation: add a specific warning about prompt injection risks to the 'Security Hardening' section, and advise sanitizing or escaping untrusted input before it is included in an LLM prompt, or using techniques like XML tagging to clearly delineate user input from system instructions. | LLM | SKILL.md:170 |
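The quoting issue behind the command-injection finding can be illustrated with a minimal shell sketch. The payload value below is invented for illustration and does not come from the audited skill; it stands in for untrusted tool input that a hook command might receive:

```shell
# An attacker-influenced path containing spaces and shell metacharacters,
# standing in for untrusted input delivered to a hook command:
file_path='/tmp/my file; $(whoami).txt'

# UNSAFE -- the pattern the finding flags: an unquoted expansion is
# word-split, globbed, and (when re-parsed, e.g. inside a JSON-embedded
# command string) can inject arbitrary commands:
#   eval "echo Checking $file_path"

# SAFE -- quote every expansion so the value is passed as one literal
# argument and never re-parsed as shell syntax:
checked=$(printf 'Checking %s' "$file_path")
echo "$checked"
```

With quoting, the embedded `$(whoami)` stays literal text instead of executing; this is the distinction between the skill's 'SAFE' and 'UNSAFE' patterns that the finding asks to be emphasized.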
Full report: [skillshield.io/report/cafe69a2377f84a5](https://skillshield.io/report/cafe69a2377f84a5)