Security Audit
security-threat-model
github.com/davila7/claude-code-templates

Trust Assessment
security-threat-model received a trust score of 57/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 0 high, 2 medium, and 1 low severity. Key findings include network egress to untrusted endpoints, covert behavior / concealment directives, and prompt injection via an external prompt template.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via external prompt template.** The skill explicitly instructs the LLM to "Use prompts in `references/prompt-template.md` to generate a repository summary" and to "Follow the required output contract in `references/prompt-template.md`. Use it verbatim when possible." The LLM will therefore interpret and execute instructions found in `references/prompt-template.md`; if that file is compromised or contains malicious instructions, an attacker could manipulate the LLM's behavior, exfiltrate data, or perform other unauthorized actions. *Remediation:* avoid loading and executing arbitrary instructions from external files. If external templates are necessary, keep them strict data templates (e.g., JSON, YAML) that are never interpreted as executable prompts, validate and sanitize all content loaded from external sources, and consider sandboxing the execution environment for such operations. | LLM | SKILL.md:11 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* review all outbound network calls and remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| MEDIUM | **File write capability with potential for content manipulation.** The skill instructs the LLM to "Write the final Markdown to a file named `<repo-or-dir-name>-threat-model.md`". Deriving the filename from the repo root or in-scope directory mitigates direct path traversal, but the underlying capability to write files remains: if the LLM is compromised via prompt injection (see SS-LLM-001), it could be instructed to write malicious content or sensitive data to this file, leading to data exfiltration or integrity compromise. *Remediation:* strictly sanitize and validate any part of a file path derived from untrusted or semi-trusted input, sandbox file writes so they cannot escape the intended output directory, and validate the written content, especially when the LLM's output is not fully trusted. | LLM | SKILL.md:81 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). *Remediation:* remove hidden instructions, zero-width characters, and bidirectional overrides; skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
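The zero-width-character finding above can be reproduced with a small scanner. Below is a minimal sketch, not SkillShield's actual implementation: the character list covers the common zero-width and bidirectional-override code points, and the function name is illustrative.

```python
# Detect "stealth text": zero-width and bidi-override code points that can
# hide instructions inside otherwise innocuous-looking manifest strings.
STEALTH_CHARS = {
    "\u200b": "ZERO WIDTH SPACE",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200d": "ZERO WIDTH JOINER",
    "\u2060": "WORD JOINER",
    "\ufeff": "ZERO WIDTH NO-BREAK SPACE (BOM)",
    "\u202a": "LEFT-TO-RIGHT EMBEDDING",
    "\u202b": "RIGHT-TO-LEFT EMBEDDING",
    "\u202d": "LEFT-TO-RIGHT OVERRIDE",
    "\u202e": "RIGHT-TO-LEFT OVERRIDE",
}

def find_stealth_chars(text: str):
    """Return (index, codepoint, name) for every hidden character found."""
    return [
        (i, f"U+{ord(ch):04X}", STEALTH_CHARS[ch])
        for i, ch in enumerate(text)
        if ch in STEALTH_CHARS
    ]

sample = "hidden\u200btext"
print(find_stealth_chars(sample))  # → [(6, 'U+200B', 'ZERO WIDTH SPACE')]
```

Running a scanner like this over every string field in a skill's manifest is enough to surface the kind of concealment flagged in `jfrog.json` above.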
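The raw-IP egress check from the MEDIUM manifest finding can likewise be sketched in a few lines. This is an illustrative approximation (the regex and function name are assumptions, not SkillShield's code): extract candidate URLs, then flag any whose host parses as a literal IP address instead of a domain name.

```python
import ipaddress
import re
from urllib.parse import urlparse

# Rough URL matcher for scanning config/manifest text.
URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def urls_with_raw_ip(text: str):
    """Flag http(s) URLs whose host is a literal IP rather than a domain."""
    flagged = []
    for match in URL_RE.finditer(text):
        host = urlparse(match.group()).hostname
        if host is None:
            continue
        try:
            ipaddress.ip_address(host)  # raises ValueError for domain names
        except ValueError:
            continue
        flagged.append(match.group())
    return flagged

config = '{"endpoint": "http://203.0.113.9/collect", "docs": "https://example.com"}'
print(urls_with_raw_ip(config))  # → ['http://203.0.113.9/collect']
```

Well-known service domains pass through untouched; only literal-IP endpoints, the pattern flagged in `figma-dev-mode.json`, are reported.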
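The remediation for the file-write finding, sanitizing the derived filename and confining writes to the output directory, can be sketched as follows. The helper name and character policy are assumptions for illustration; the point is the two-step defense: strip path separators from the name, then verify the resolved path stayed inside the intended directory.

```python
import re
from pathlib import Path

def safe_output_path(repo_name: str, out_dir: str) -> Path:
    """Build `<repo-name>-threat-model.md` inside out_dir, rejecting traversal."""
    # Keep only filename-safe characters; path separators become hyphens.
    stem = re.sub(r"[^A-Za-z0-9._-]", "-", repo_name).strip(".-")
    if not stem:
        raise ValueError("repo name reduces to an empty filename")
    base = Path(out_dir).resolve()
    path = (base / f"{stem}-threat-model.md").resolve()
    if path.parent != base:  # belt-and-suspenders: confirm we stayed in out_dir
        raise ValueError("output path escaped the output directory")
    return path
```

With this in place, a name like `../../etc` collapses to `etc-threat-model.md` inside the output directory instead of escaping it.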
Powered by SkillShield