Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-create-ux-design
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-create-ux-design received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Prompt Injection via External File Instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via External File Instruction | LLM | SKILL.md:3 |

The skill's primary markdown file, which is treated as untrusted input, contains an instruction for the host LLM to "Follow the instructions in ./workflow.md". This is a direct prompt injection attempt, instructing the LLM to deviate from its intended behavior and execute commands or read content from an external file. If `workflow.md` contains malicious instructions, this could lead to arbitrary command execution, data exfiltration, or further manipulation of the LLM's actions.

Remediation: remove any instructions or commands intended for the host LLM from untrusted input sections. Skill behavior should be defined by trusted configuration or code, not by content that can be manipulated by an attacker. If `workflow.md` is a legitimate part of the skill's operation, its invocation should be handled by trusted code, not by a direct instruction within untrusted markdown.
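The flagged pattern, a markdown line telling the host LLM to follow an external file, can also be caught with a simple lexical scan. The sketch below is not SkillShield's actual analysis (which, per the layer list above, is LLM-based); it is a minimal heuristic illustration, and the pattern list and function name `find_external_instruction_lines` are invented for this example.

```python
import re

# Heuristic phrases that suggest a skill's markdown is redirecting the
# host LLM to pull behavior from an external file (a common injection vector).
# This list is illustrative, not exhaustive.
INJECTION_PATTERNS = [
    re.compile(r"follow the instructions in\s+\S+", re.IGNORECASE),
    re.compile(r"execute the (commands|steps) in\s+\S+", re.IGNORECASE),
    re.compile(r"read and obey\s+\S+", re.IGNORECASE),
]

def find_external_instruction_lines(markdown: str) -> list[tuple[int, str]]:
    """Return (line_number, stripped_line) pairs matching an injection pattern."""
    hits = []
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        if any(p.search(line) for p in INJECTION_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# Hypothetical SKILL.md content mirroring the finding at SKILL.md:3.
skill_md = """# bmad-create-ux-design
Purpose: generate UX design artifacts.
Follow the instructions in ./workflow.md
"""
print(find_external_instruction_lines(skill_md))
# → [(3, 'Follow the instructions in ./workflow.md')]
```

A lexical check like this is cheap but brittle (easily rephrased around), which is why a behavioral layer such as the LLM analysis above is still needed to catch paraphrased redirections.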
Full report: [skillshield.io/report/636d0bb575a6156e](https://skillshield.io/report/636d0bb575a6156e)