Security Audit
requirements-clarity
github.com/davila7/claude-code-templates

Trust Assessment
requirements-clarity received a trust score of 64/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 1 medium, and 1 low severity. Key findings include network egress to untrusted endpoints, covert behavior / concealment directives, and potential path traversal in PRD file generation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Path Traversal in PRD File Generation.** The skill instructs the LLM to write a Product Requirements Document (PRD) to a file path built from user-influenced components: `./docs/prds/{feature_name}-v{version}-prd.md`. `{feature_name}` is derived from the user's initial requirement description, and `{version}` can be user-specified. If the LLM's sanitization of these inputs is insufficient (e.g., it fails to strip traversal sequences like `../` or absolute-path indicators), an attacker could craft input that writes files to arbitrary locations on the file system, potentially causing data corruption, denial of service, or even remote code execution if sensitive files are overwritten. *Mitigation:* strictly sanitize `{feature_name}` and `{version}` to reject traversal sequences (`../`, `/`) and absolute paths, for example by allowing only alphanumerics and hyphens in `feature_name` and a numeric/dot format in `version`, and constrain the `Write` tool to a specific, non-sensitive directory so it cannot overwrite critical system files. | LLM | SKILL.md:100 |
| HIGH | **Prompt Injection Risk via Generated PRD Content.** The skill's core functionality gathers user input through interactive clarification and synthesizes it into a comprehensive PRD, which is then saved to a file. If that PRD is later read and processed by another LLM or automated system, a malicious user could embed prompt-injection instructions in their initial requirement description or subsequent clarifications. Once part of the PRD, those instructions could manipulate the downstream system, leading to unintended actions, data leakage, or system compromise. *Mitigation:* validate and sanitize all user-provided text incorporated into the PRD, especially if the PRD is intended for consumption by other automated systems or LLMs; consider a human-review step before the PRD is finalized; and if another LLM consumes the PRD, ensure it is robustly defended against prompt injection, for example by processing untrusted input with a separate, sandboxed LLM. | LLM | SKILL.md:160 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Mitigation:* review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). *Mitigation:* remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
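The path-traversal mitigation described above can be sketched as a small validator. This is an illustrative Python sketch only, not code from the skill or from SkillShield: the function name `safe_prd_path` and the exact allowed character sets are assumptions based on the finding's recommendation (alphanumerics and hyphens for the feature name, digits and dots for the version).

```python
import re
from pathlib import Path

# Assumed PRD output directory, matching the path pattern in the finding.
PRD_ROOT = Path("./docs/prds").resolve()

def safe_prd_path(feature_name: str, version: str) -> Path:
    """Build a PRD file path, rejecting traversal sequences and absolute paths."""
    # Allow-list validation: reject anything containing '/', '..', etc.
    if not re.fullmatch(r"[A-Za-z0-9-]+", feature_name):
        raise ValueError(f"invalid feature name: {feature_name!r}")
    if not re.fullmatch(r"[0-9]+(\.[0-9]+)*", version):
        raise ValueError(f"invalid version: {version!r}")
    path = (PRD_ROOT / f"{feature_name}-v{version}-prd.md").resolve()
    # Defense in depth: confirm the resolved path is still under PRD_ROOT.
    if PRD_ROOT not in path.parents:
        raise ValueError("path escapes PRD directory")
    return path
```

Allow-list validation (accept only known-safe characters) is generally more robust here than trying to strip `../` sequences, which can be bypassed by nested or encoded variants.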
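The zero-width-character finding can be checked for mechanically. The sketch below is a minimal detector under stated assumptions: the code-point set is a common selection of zero-width and bidirectional-override characters, not SkillShield's actual scanner logic, and `find_stealth_chars` is a hypothetical helper name.

```python
import unicodedata

# Zero-width and bidi-override code points commonly used to hide text.
STEALTH_CHARS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space (BOM)
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeds/overrides
}

def find_stealth_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, character-name) pairs for hidden characters in text."""
    return [
        (i, unicodedata.name(ch, f"U+{ord(ch):04X}"))
        for i, ch in enumerate(text)
        if ch in STEALTH_CHARS
    ]
```

Running such a check over skill manifests and instruction files before installation makes concealment directives like the one flagged above visible to reviewers.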