Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-testarch-automate
github.com/PabloLION/bmad-plugin
Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-testarch-automate received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Prompt Injection via External Instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via External Instruction | LLM | SKILL.md:1 |

The skill's primary instruction, which is treated as untrusted content, attempts to dictate the LLM's behavior by instructing it to 'Follow the instructions in [workflow.md](workflow.md)'. This is a direct prompt injection attempt: untrusted input tries to override or extend the LLM's operational instructions by pointing to an external, potentially malicious, source of further instructions. The LLM should not execute instructions originating from untrusted content.

Remediation: remove or rephrase the instruction 'Follow the instructions in [workflow.md](workflow.md)' in the untrusted skill content. All instructions for the LLM should come from the trusted system prompt, not from the untrusted skill content itself. If `workflow.md` contains necessary steps, integrate them into the trusted system prompt or invoke them explicitly through a trusted tool.
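A check for this class of finding can be sketched as a simple content scan. This is an illustrative example only, not SkillShield's actual detector; the helper name and the regex are assumptions chosen to match the flagged phrasing above.

```python
import re

# Hypothetical detector (not SkillShield's implementation): flag lines in
# untrusted skill content that direct the model to follow instructions from
# an external markdown file, e.g.
#   "Follow the instructions in [workflow.md](workflow.md)"
EXTERNAL_INSTRUCTION = re.compile(
    r"follow\s+the\s+instructions\s+in\s+\[?[^\]\s]+\.md\]?",
    re.IGNORECASE,
)

def find_external_instruction_lines(skill_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs matching the injection pattern."""
    return [
        (i, line)
        for i, line in enumerate(skill_text.splitlines(), start=1)
        if EXTERNAL_INSTRUCTION.search(line)
    ]

sample = (
    "Follow the instructions in [workflow.md](workflow.md)\n"
    "This line is ordinary prose."
)
flagged = find_external_instruction_lines(sample)
```

A scanner like this would report `SKILL.md:1`, matching the Location column above; a real tool would combine such static checks with behavioral analysis, since paraphrased injection attempts evade any single regex.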
Full report: https://skillshield.io/report/0735d373927ed6e3