Security Audit
PabloLION/bmad-plugin:plugins/bmad/_shared/tasks/bmad-create-prd
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/_shared/tasks/bmad-create-prd received a trust score of 72/100, placing it in the Caution category. Users should review this skill's security findings before deployment.
SkillShield's automated analysis identified one finding (1 critical, 0 high, 0 medium, 0 low): a Prompt Injection Attempt in `SKILL.md`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection Attempt in SKILL.md | LLM | SKILL.md:1 |

The `SKILL.md` file, which is treated as untrusted input, contains the instruction `Follow the instructions in ./workflow.md.`, an attempt to manipulate the host LLM's behavior. This is a direct prompt injection: untrusted content is issuing commands to the LLM, violating the principle that untrusted content must not contain instructions.

Remediation: Remove all instructional language from `SKILL.md`. The file should describe the skill's purpose and usage, not contain commands for the LLM to execute; only trusted, pre-defined instructions should guide the LLM's behavior.
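A minimal sketch of how instruction-like language in a skill manifest might be flagged mechanically. The patterns and function name here are illustrative assumptions, not SkillShield's actual detection rules:

```python
import re

# Illustrative patterns for instruction-like lines in untrusted manifest text.
# These are assumed heuristics, not SkillShield's real rule set.
INSTRUCTION_PATTERNS = [
    re.compile(r"^\s*follow the instructions", re.IGNORECASE),
    re.compile(r"^\s*(ignore|disregard)\s+(all\s+|any\s+)?previous", re.IGNORECASE),
    re.compile(r"^\s*you (must|should|will)\b", re.IGNORECASE),
]

def flag_instruction_lines(text: str) -> list[tuple[int, str]]:
    """Return (1-based line number, stripped line) for lines matching a pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in INSTRUCTION_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

skill_md = "Follow the instructions in ./workflow.md.\nThis skill drafts a PRD."
print(flag_instruction_lines(skill_md))
# flags only line 1, the injected instruction
```

A real scanner would combine such lexical heuristics with an LLM-based behavioral check, as the report's fourth analysis layer does.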
Full report: https://skillshield.io/report/c8df66857f4122db
Powered by SkillShield