Trust Assessment
related-skill received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include a broad Bash permission for 'npx skills', potential command injection via 'npx skills' arguments, and an unpinned 'npx skills' dependency.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Broad Bash permission for 'npx skills'.** The skill declares `Bash(npx skills *)` as an allowed tool, granting it the ability to execute `npx skills` with arbitrary arguments. This overly broad permission significantly increases the attack surface: an attacker could craft malicious arguments via prompt injection to the LLM, leading to unintended actions or command injection if the `npx skills` tool or the LLM's command construction is vulnerable. **Remediation:** restrict the `Bash` permission to the specific subcommands and argument patterns actually needed, e.g. if only add, search, list, remove, and update are required, specify `Bash(npx skills add\|search\|list\|remove\|update *)`. Consider a more constrained tool definition if `npx skills` offers a programmatic API or if specific arguments can be whitelisted. | LLM | Manifest |
| HIGH | **Potential command injection via 'npx skills' arguments.** The `Bash(npx skills *)` permission lets the skill execute `npx skills` with arbitrary arguments. If the LLM constructs those arguments from untrusted user input without sanitization, an attacker could inject shell metacharacters (e.g. `;`, `&&`, `\|`, `` ` ``) to run arbitrary commands on the host. For example, a request to install a skill named `foo; rm -rf /` could cause the LLM to construct `npx skills add inference-sh/agent-skills@foo; rm -rf /`, a critical system compromise. The `SKILL.md` usage examples show exactly the commands that would be vulnerable if arguments are not sanitized. **Remediation:** (1) strictly validate and sanitize any user-provided input before incorporating it into `npx skills` commands; (2) constrain the `Bash` permission to specific subcommands and argument patterns; (3) prefer a programmatic API or an argument-passing mechanism that avoids the shell entirely; (4) if direct shell execution is unavoidable, shell-escape every user-provided argument. | LLM | Manifest |
| MEDIUM | **Unpinned 'npx skills' dependency.** The skill relies on the `npx skills` command, which executes the `skills` npm package, but the provided context does not show the package being installed at a pinned version. An unpinned or globally resolved `npx` package is a supply-chain risk: a compromised or malicious version of `skills` could be executed, potentially compromising the host. The broad `Bash(npx skills *)` permission amplifies this risk. **Remediation:** (1) install the `skills` package at a specific, pinned version (e.g. via `package.json` and `package-lock.json`); (2) verify installed packages with `npm ci`, which checks them against the lockfile's integrity hashes; (3) run skills in isolated environments where dependencies are strictly controlled and audited. | LLM | SKILL.md:10 |
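As a sketch of the first finding's remediation, a tightened skill manifest might scope the permission to the subcommands the finding lists. The skill name below is hypothetical, and the exact matcher syntax for permission rules depends on the host runtime; the pattern shown is the one suggested in the finding itself.

```yaml
---
name: skills-manager   # hypothetical skill name
# Allow only known subcommands instead of Bash(npx skills *)
allowed-tools: Bash(npx skills add|search|list|remove|update *)
---
```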
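For the third finding, pinning could look like the following `package.json` fragment; the version number is a placeholder, not the real version of the `skills` package. An exact version (no `^` or `~` range) combined with a committed `package-lock.json` and `npm ci` ensures every install resolves to the same audited bits.

```json
{
  "dependencies": {
    "skills": "1.4.2"
  }
}
```

With the lockfile committed, `npm ci` fails the install if any package's contents do not match the recorded integrity hash, closing the substitution window that an unpinned `npx` invocation leaves open.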
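The second finding's sanitization advice can be sketched in Python. This is an illustrative pattern, not code from the skill: skill names are checked against a conservative allowlist, and `npx skills add` is invoked with an argv list so no shell ever parses the arguments.

```python
import re
import subprocess

# Conservative allowlist for skill identifiers like
# 'inference-sh/agent-skills@foo' (illustrative, not the skill's own rule).
SKILL_NAME_RE = re.compile(r"^[A-Za-z0-9._/@-]+$")

def add_skill(name: str) -> None:
    """Install a skill only if the name passes strict validation."""
    if not SKILL_NAME_RE.fullmatch(name):
        raise ValueError(f"rejected suspicious skill name: {name!r}")
    # argv is passed as a list, so metacharacters such as ';' or '&&'
    # are treated as literal characters, never interpreted by a shell.
    subprocess.run(["npx", "skills", "add", name], check=True)
```

Rejecting bad input *and* avoiding the shell are complementary: the allowlist stops suspicious names early, and the list-form `subprocess.run` call guarantees that even a name slipping through cannot break out of its argument position.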
[View the full report on SkillShield](https://skillshield.io/report/31c0a9bdd2aa0839)