Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-create-product-brief
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-create-product-brief received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The single finding is Prompt Injection via Untrusted Instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
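SkillShield does not publish its exact scoring formula. As an illustration only, the sketch below assumes a simple model in which the overall trust score is the mean of the four layer scores and the category is derived from fixed thresholds; the per-layer scores and the threshold values are hypothetical (the report states only that all layers scored 70 or above).

```python
# Illustrative sketch only: SkillShield's actual scoring formula is not
# published. Layer scores below are hypothetical values consistent with
# the report (all >= 70, overall 72/100).

LAYERS = {
    "Manifest Analysis": 75,
    "Static Code Analysis": 74,
    "Dependency Graph": 70,
    "LLM Behavioral Safety": 70,
}

def category(score: float) -> str:
    """Map a 0-100 trust score to a report category (assumed thresholds)."""
    if score >= 85:
        return "Trusted"
    if score >= 60:
        return "Caution"
    return "High Risk"

# Assumed aggregation: unweighted mean of the layer scores.
overall = sum(LAYERS.values()) / len(LAYERS)
print(f"{overall:.0f}/100 -> {category(overall)}")  # 72/100 -> Caution
```

Under these assumptions a skill can land in Caution even when every individual layer clears 70, because the category is driven by the aggregate rather than any single layer.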
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via Untrusted Instruction.** The `SKILL.md` file contains a direct instruction to the host LLM (`Follow the instructions in ./workflow.md.`) embedded within the untrusted input delimiters. This is a prompt injection attempt: it tries to manipulate the LLM's behavior by instructing it to process content from another file, bypassing the security boundary that designates content within these tags as untrusted data rather than executable instructions. Remediation: remove instructions intended for the host LLM from within untrusted input delimiters; content within these tags should be treated as data or user input, not as commands for the LLM. If `workflow.md` is part of the skill's intended logic, it should be loaded through a trusted, explicit mechanism, not via an instruction embedded in untrusted content. | LLM | SKILL.md:1 |
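The class of check behind this finding can be sketched as a scan for instruction-like phrases inside untrusted-input delimiters. The sketch below is not SkillShield's actual analyzer: the `<untrusted>` tag name and the instruction patterns are assumptions chosen to mirror the finding above.

```python
import re

# Hypothetical detector sketch: flag LLM-directed instructions that appear
# inside untrusted-input delimiters. Tag name and patterns are assumptions,
# not SkillShield's real rules.

INSTRUCTION_PATTERNS = [
    r"\bfollow the instructions in\b",
    r"\bignore (?:all )?previous instructions\b",
    r"\byou (?:must|should) now\b",
]

def find_injections(text: str) -> list[str]:
    """Return instruction-like phrases found inside <untrusted>...</untrusted> blocks."""
    hits: list[str] = []
    for block in re.findall(r"<untrusted>(.*?)</untrusted>", text, re.S | re.I):
        for pattern in INSTRUCTION_PATTERNS:
            hits += re.findall(pattern, block, re.I)
    return hits

# Mirrors the flagged content from the finding above.
skill_md = "<untrusted>Follow the instructions in ./workflow.md.</untrusted>"
print(find_injections(skill_md))  # a non-empty result indicates a potential injection
```

Pattern matching of this kind is necessarily heuristic; the underlying fix is the one the finding recommends, i.e. keeping LLM-directed instructions out of untrusted-content regions entirely.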
Scan History
[Full report](https://skillshield.io/report/807c0e1350398114)
Powered by SkillShield