Security Audit
vulnerability-scanner
github.com/davila7/claude-code-templates

Trust Assessment
vulnerability-scanner received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 9 findings: 4 critical, 2 high, 1 medium, 1 low, and 1 informational. Key findings include arbitrary command execution, file read plus network send exfiltration, and a dangerous call to subprocess.run().
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
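To make the deterministic layers concrete, the kind of check the static analysis performs can be pictured as a small AST walk over the skill's Python source. This is an illustrative sketch only: the helper name `flag_dangerous_calls` and the specific call lists are assumptions, not SkillShield's actual implementation.

```python
import ast

# Illustrative call lists; a real scanner would cover many more patterns.
DANGEROUS_CALLS = {"eval", "exec", "compile"}
DANGEROUS_ATTRS = {("os", "system"), ("subprocess", "run"), ("subprocess", "Popen")}

def flag_dangerous_calls(source: str) -> list[tuple[int, str]]:
    """Return (line, description) pairs for risky call sites, sorted by line."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        if isinstance(func, ast.Name) and func.id in DANGEROUS_CALLS:
            findings.append((node.lineno, f"dynamic code execution: {func.id}()"))
        elif (isinstance(func, ast.Attribute)
              and isinstance(func.value, ast.Name)
              and (func.value.id, func.attr) in DANGEROUS_ATTRS):
            findings.append((node.lineno, f"shell execution: {func.value.id}.{func.attr}()"))
    return sorted(findings)

sample = "import os\nos.system('ls')\neval(user_input)\n"
print(flag_dangerous_calls(sample))
```

A check like this is purely syntactic, which is why the report pairs it with an LLM semantic layer and then cross-checks the two (see the sanity-check finding below).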
Security Findings (9)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | cli-tool/components/skills/security/vulnerability-scanner/scripts/security_scan.py:134 |
| CRITICAL | **Arbitrary command execution.** Python dynamic code execution (exec/eval/compile). Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | cli-tool/components/skills/security/vulnerability-scanner/scripts/security_scan.py:63 |
| CRITICAL | **Arbitrary command execution.** Python dynamic code execution (exec/eval/compile). Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | cli-tool/components/skills/security/vulnerability-scanner/scripts/security_scan.py:64 |
| CRITICAL | **File read + network send exfiltration.** .env file access. Remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | cli-tool/components/skills/security/vulnerability-scanner/scripts/security_scan.py:90 |
| HIGH | **Dangerous call: subprocess.run().** Call to subprocess.run() detected in function scan_dependencies; this can execute arbitrary code. Avoid dangerous functions such as exec/eval/os.system and use safer alternatives. | Static | cli-tool/components/skills/security/vulnerability-scanner/scripts/security_scan.py:134 |
| HIGH | **LLM analysis found no issues despite critical deterministic findings.** Deterministic layers flagged 4 CRITICAL findings, but LLM semantic analysis returned clean; this may indicate prompt injection or analysis evasion. | LLM | (sanity check) |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. Review all outbound network calls and remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). Remove hidden instructions, zero-width characters, and bidirectional overrides; skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
| INFO | **Sensitive data in output.** The `security_scan.py` script is designed to identify and report sensitive information (API keys, tokens, passwords, cloud credentials) found in the scanned project. While this is the intended functionality of a vulnerability scanner, the JSON output printed to `stdout` will contain the identified secrets; if the host LLM or its environment logs or stores this output without sanitization, redaction, or access controls, the secrets could be exposed. Redact sensitive values before logging, restrict access to logs, or encrypt stored outputs. The skill itself identifies and reports correctly; the downstream handling of its output is critical for overall security. | Static | scripts/security_scan.py:10 |
[Full report](https://skillshield.io/report/fb9ff892006d6705)
Powered by SkillShield