Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-edit-prd
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-edit-prd received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The sole finding is Prompt Injection via Instruction Following.
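The severity roll-up quoted above can be reproduced mechanically from the findings list. A minimal sketch (the dict shape and field names are assumptions for illustration, not SkillShield's actual data model):

```python
from collections import Counter

# The one finding reported for this skill, in a hypothetical record format.
findings = [
    {"title": "Prompt Injection via Instruction Following", "severity": "critical"},
]

# Count findings per severity bucket, emitting zeroes for empty buckets.
counts = Counter(f["severity"] for f in findings)
summary = ", ".join(
    f"{counts.get(level, 0)} {level}" for level in ("critical", "high", "medium", "low")
)
print(summary)  # 1 critical, 0 high, 0 medium, 0 low
```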
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via Instruction Following: The `SKILL.md` file contains a direct instruction to the host LLM: "Follow the instructions in ./workflow.md." This is a prompt-injection vector, since it directs the LLM to process external content as instructions. Because `workflow.md` is not provided, its contents are unknown and potentially malicious, and could lead the LLM to execute arbitrary commands or change its operational parameters. Remediation: remove direct instructions to the LLM from untrusted skill definitions. A skill definition should describe the skill's purpose and usage, not issue commands to the LLM. If external files are needed, they should be explicitly loaded and processed by a trusted interpreter, not implicitly followed by the LLM. | LLM | SKILL.md:1 |
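A static scan for this class of finding can be sketched as a pattern match over the skill's text. The patterns below are illustrative assumptions, not SkillShield's published detection rules; only the `"Follow the instructions in ./workflow.md."` string comes from the finding itself:

```python
import re

# Illustrative instruction-following patterns; real scanners use broader rule sets.
INJECTION_PATTERNS = [
    re.compile(r"follow the instructions in\s+\S+", re.IGNORECASE),
    re.compile(r"ignore (all |any )?previous instructions", re.IGNORECASE),
]

def scan_skill_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_text) pairs for suspicious directives."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in INJECTION_PATTERNS:
            match = pattern.search(line)
            if match:
                findings.append((lineno, match.group(0)))
    return findings

# The flagged line from this report's SKILL.md, placed in a two-line sample.
skill_md = "bmad-edit-prd\nFollow the instructions in ./workflow.md."
print(scan_skill_text(skill_md))  # [(2, 'Follow the instructions in ./workflow.md.')]
```

Reporting the line number alongside the match mirrors the `SKILL.md:1` location format used in the table above.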
[View the full report on SkillShield](https://skillshield.io/report/849d93aa3f74c0a9)