Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-quick-dev
github.com/PabloLION/bmad-pluginTrust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-quick-dev received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Prompt Injection Attempt via External File Instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection Attempt via External File Instruction The untrusted content within `SKILL.md` attempts to instruct the LLM to 'Follow the instructions in ./workflow.md.'. This is a direct prompt injection attempt, as it tries to override the LLM's primary instructions by directing it to process content from an external, potentially untrusted, source. This could lead to arbitrary command execution, data exfiltration, or other malicious actions if the referenced file contains harmful instructions. Untrusted content must never be allowed to issue instructions to the LLM. The system should explicitly instruct the LLM to treat all content within the untrusted delimiters as data, not commands. If external files are required for skill execution, they should be loaded and processed by the trusted system, not at the behest of untrusted input. | LLM | SKILL.md:3 |
Scan History
Embed Code
[](https://skillshield.io/report/698496c5a9d583ac)
Powered by SkillShield