Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-advanced-elicitation
github.com/PabloLION/bmad-pluginTrust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-advanced-elicitation received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Direct Prompt Injection via User Input Handling.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Direct Prompt Injection via User Input Handling The skill explicitly instructs the LLM to 'try best to follow the instructions given by the user' if the user provides any reply other than 'y' or 'n' when prompted to apply changes. This creates a direct prompt injection vulnerability, allowing a malicious user to provide arbitrary instructions to the LLM, potentially overriding its internal directives, extracting sensitive information from its context, or performing unintended actions. Modify the instruction to strictly limit user input handling. Instead of 'try best to follow the instructions given by the user', specify a limited set of valid responses or provide a default action for invalid input. For example, 'If any other reply, re-prompt the user for a valid choice (y/n) or default to discarding changes.' | LLM | SKILL.md:80 |
Scan History
Embed Code
[](https://skillshield.io/report/27543e8d29aea3d6)
Powered by SkillShield