Trust Assessment
pdd received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Arbitrary File Creation/Write via User-Controlled Path, Second-Order Prompt Injection via Generated PROMPT.md.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Arbitrary File Creation/Write via User-Controlled Path The skill explicitly instructs the creation of directories and files at a path (`project_dir`) that is directly controlled by the user. If the underlying AI agent has broad filesystem write permissions, a malicious user could specify sensitive system directories (e.g., `/etc`, `~/.ssh`, `/var/www`) as the `project_dir` or a subdirectory within them. This could lead to the creation of arbitrary files in sensitive locations, potentially overwriting existing files (though the skill attempts to prevent overwriting *existing project directories*, it doesn't prevent writing *into* existing system directories or creating new ones). Implement strict path validation and sanitization for `project_dir`. Restrict `project_dir` to a specific, isolated sandbox directory (e.g., `~/.claw/projects/`). Ensure the AI agent's runtime environment has minimal necessary filesystem permissions. | LLM | SKILL.md:41 | |
| HIGH | Second-Order Prompt Injection via Generated PROMPT.md The skill explicitly instructs the creation of a `PROMPT.md` file for another AI agent ("Ralph") in Step 9. This `PROMPT.md` is populated with an "Objective statement", "Key requirements", and "Acceptance criteria" which are derived from the user's initial `rough_idea` and subsequent iterative interactions. A malicious user could inject prompt injection instructions into their `rough_idea` or during the requirements clarification phase, which would then be incorporated into the generated `PROMPT.md`. When "Ralph" processes this `PROMPT.md`, it could be manipulated. Implement robust sanitization and filtering of user input before it is incorporated into the generated `PROMPT.md`. Consider using techniques like input validation, escaping special characters, or employing a "sandwich prompt" approach when generating prompts for downstream LLMs to isolate user input. Clearly delineate user-provided content from system instructions within the generated prompt. | LLM | SKILL.md:130 |
Scan History
Embed Code
[](https://skillshield.io/report/a12f1534ef5c6ece)
Powered by SkillShield