Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-qa-generate-e2e-tests
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-qa-generate-e2e-tests received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Prompt Injection via Untrusted Skill Body.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via Untrusted Skill Body | LLM | SKILL.md:3 |

The body of the SKILL.md file, which is explicitly marked as untrusted input, contains a direct instruction to the host LLM: "Follow the instructions in ./workflow.md." This is a clear prompt-injection attempt: it tries to steer the LLM's behavior with instructions from an untrusted source, potentially leading to unexpected actions or disclosure of information. The SKILL.md body should contain only descriptive text about the skill's purpose, not direct instructions for the LLM to execute. Any operational instructions for the LLM should be defined in trusted configuration files or the skill's manifest, outside of untrusted content delimiters.
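To illustrate the class of issue flagged above, here is a minimal heuristic sketch (not SkillShield's actual detector, whose internals are not public) that scans a SKILL.md body for imperative instructions directing the host LLM to follow content from another file. The pattern and function names are illustrative assumptions.

```python
import re

# Hypothetical heuristic: flag lines in the untrusted skill body that issue a
# directive ("follow", "execute", "run", "obey") about instructions/steps and
# reference an external file (./path, .md, .sh, .py). Real scanners would use
# richer signals; this only demonstrates the finding's shape.
DIRECTIVE_PATTERN = re.compile(
    r"\b(follow|execute|run|obey)\b.*\b(instructions?|steps?)\b.*"
    r"(\./|\.md|\.sh|\.py)",
    re.IGNORECASE,
)

def find_injection_candidates(skill_body: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like prompt-injection
    attempts in an untrusted SKILL.md body."""
    hits = []
    for lineno, line in enumerate(skill_body.splitlines(), start=1):
        if DIRECTIVE_PATTERN.search(line):
            hits.append((lineno, line.strip()))
    return hits

# The exact line cited in the report (SKILL.md:3) trips the heuristic:
body = "# bmad-qa-generate-e2e-tests\n\nFollow the instructions in ./workflow.md.\n"
print(find_injection_candidates(body))
# → [(3, 'Follow the instructions in ./workflow.md.')]
```

Purely descriptive text ("This skill generates e2e tests for the QA workflow") passes untouched, which matches the report's remediation guidance: keep the untrusted body descriptive and move operational instructions into trusted configuration.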
Full report: https://skillshield.io/report/71377b3e91033f6d