Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-create-prd
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-create-prd received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Prompt Injection via Instruction Redirection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via Instruction Redirection | LLM | SKILL.md:1 |

The primary skill instruction in `SKILL.md` attempts to redirect the host LLM to "Follow the instructions in ./workflow.md". This is a direct prompt injection pattern: untrusted content dictates the LLM's behavior by instructing it to read and execute content from another file, bypassing the intended skill invocation mechanism. If `workflow.md` contains malicious commands or further injection attempts, this could lead to arbitrary instruction execution.

Remediation: remove direct instructions to the LLM from untrusted content. If `workflow.md` is intended to be part of the skill's logic, it must be invoked through a defined and sandboxed tool or API call, not via a direct LLM instruction. The skill's functionality should be encapsulated within explicit tool definitions.
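The redirection pattern described above can be screened for mechanically. The sketch below is a hypothetical detector, not SkillShield's actual rule set: it flags skill text that instructs the host LLM to follow or execute the contents of another file, and the phrase list is illustrative only.

```python
import re

# Illustrative patterns for instruction-redirection phrases. A real scanner
# would use a much broader rule set; these two are assumptions for the sketch.
REDIRECT_PATTERNS = [
    re.compile(r"follow\s+the\s+instructions\s+in\s+\S+\.md", re.IGNORECASE),
    re.compile(r"read\s+and\s+(?:execute|follow)\s+\S+\.md", re.IGNORECASE),
]

def find_redirections(skill_text: str) -> list[tuple[int, str]]:
    """Return (1-based line number, matched text) for each redirection hit."""
    hits = []
    for lineno, line in enumerate(skill_text.splitlines(), start=1):
        for pattern in REDIRECT_PATTERNS:
            match = pattern.search(line)
            if match:
                hits.append((lineno, match.group(0)))
    return hits

sample = "Follow the instructions in ./workflow.md to create the PRD."
print(find_redirections(sample))
# → [(1, 'Follow the instructions in ./workflow.md')]
```

A check like this only catches known phrasings; it complements, rather than replaces, the structural fix of routing `workflow.md` through an explicit, sandboxed tool call.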
Scan History
[View full report](https://skillshield.io/report/cb302b11f148444f)
Powered by SkillShield