Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-dev-story
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-dev-story received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified one finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is a Prompt Injection Attempt via Untrusted Skill Description.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
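The report maps the 72/100 trust score to the Caution category. A minimal sketch of such a mapping is shown below; the cutoff values are illustrative assumptions for demonstration, since the report does not publish SkillShield's actual thresholds.

```python
# Illustrative sketch only: the category cutoffs below are assumptions,
# not SkillShield's published thresholds.

def trust_category(score: int) -> str:
    """Map a 0-100 trust score to a coarse category (hypothetical cutoffs)."""
    if not 0 <= score <= 100:
        raise ValueError("trust score must be between 0 and 100")
    if score >= 80:   # assumed cutoff for the top tier
        return "Trusted"
    if score >= 60:   # assumed cutoff; a score of 72 falls here
        return "Caution"
    return "High Risk"

print(trust_category(72))  # -> Caution
```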
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection Attempt via Untrusted Skill Description.** The `SKILL.md` file, which is treated as untrusted input, contains an instruction intended for the host LLM: `Follow the instructions in ./workflow.md.`. This is a direct attempt to manipulate the LLM's behavior by injecting a command from untrusted content. The LLM should not execute instructions found within untrusted data, as this can lead to arbitrary command execution or data exfiltration if the referenced file contains malicious instructions. Untrusted content, such as skill descriptions or user-provided input, must never contain instructions for the host LLM. If `workflow.md` is a necessary part of the skill's operation, its execution should be explicitly handled by the skill's trusted code, not by an instruction embedded in the untrusted `SKILL.md` file. Remove any LLM-directed instructions from untrusted content. | LLM | SKILL.md:1 |
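The kind of check that surfaces this finding can be sketched as a simple pattern scan over untrusted skill metadata. The patterns below are illustrative assumptions, not SkillShield's actual rule set.

```python
import re

# Minimal sketch: flag imperative, LLM-directed phrases inside untrusted
# skill metadata such as SKILL.md. Patterns here are illustrative only.
INJECTION_PATTERNS = [
    re.compile(r"\bfollow the instructions in\b", re.IGNORECASE),
    re.compile(r"\bignore (all|any|previous) instructions\b", re.IGNORECASE),
    re.compile(r"\byou (must|should) (run|execute|open)\b", re.IGNORECASE),
]

def scan_untrusted_text(text: str) -> list[str]:
    """Return the lines of untrusted text that match an injection pattern."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in INJECTION_PATTERNS):
            hits.append(line)
    return hits

skill_md = "Follow the instructions in ./workflow.md."
print(scan_untrusted_text(skill_md))  # -> ['Follow the instructions in ./workflow.md.']
```

A real scanner would combine lexical rules like these with LLM-based behavioral analysis, since pattern lists alone are easy to evade.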
[Full report](https://skillshield.io/report/f0eac4d088c56fac)