Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-testarch-test-design
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-testarch-test-design received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding (1 critical, 0 high, 0 medium, 0 low). The sole finding is Prompt Injection via instruction redirection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via instruction redirection | LLM | SKILL.md:1 |

The `SKILL.md` file, which is treated as untrusted input, contains a direct instruction to the host LLM: `Follow the instructions in [workflow.md](workflow.md).` This attempts to manipulate the LLM's behavior by injecting new instructions from an untrusted source, potentially leading the LLM to execute arbitrary commands or alter its intended function based on the content of `workflow.md`.

Remediation: Remove direct instructions to the LLM from untrusted content. Skill definitions should be declarative, describing the skill's purpose, rather than imperative commands for the LLM to follow. If `workflow.md` contains necessary instructions, integrate them into the trusted system prompt or skill definition instead of injecting them via untrusted input.
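The redirection pattern flagged above can be caught with a simple heuristic scan. The sketch below is illustrative only (it is not SkillShield's actual scanner, and the pattern list is a hypothetical starting set): it flags lines in a skill definition that read as imperative commands steering the LLM toward external content.

```python
import re

# Illustrative phrases that direct the host LLM to take instructions from
# elsewhere. A real scanner would use a much broader, maintained pattern set.
REDIRECTION_PATTERNS = [
    re.compile(r"follow the instructions in\s+\[?[\w./-]+", re.IGNORECASE),
    re.compile(r"execute the (?:steps|commands) in\s+\[?[\w./-]+", re.IGNORECASE),
    re.compile(r"ignore (?:previous|prior) instructions", re.IGNORECASE),
]

def find_redirections(skill_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like instruction redirection."""
    hits = []
    for lineno, line in enumerate(skill_text.splitlines(), start=1):
        if any(p.search(line) for p in REDIRECTION_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# The exact line cited in the finding (SKILL.md:1) triggers a hit:
skill_md = "Follow the instructions in [workflow.md](workflow.md)."
print(find_redirections(skill_md))  # flags line 1
```

A keyword scan like this is cheap but easy to evade with paraphrasing, which is why the report pairs static analysis with an LLM behavioral-safety layer.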
[Full report](https://skillshield.io/report/8c18de20239bed04)