Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-testarch-nfr
github.com/PabloLION/bmad-pluginTrust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-testarch-nfr received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via external instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via external instruction The skill attempts to inject instructions into the host LLM by directing it to 'Follow the instructions in [workflow.md]'. This allows untrusted content from `workflow.md` to potentially manipulate the LLM's behavior, override its instructions, or extract information, bypassing the security analysis of `workflow.md` itself. Remove direct instructions to the LLM within untrusted skill content. If external files are necessary for skill execution, they should be processed by the skill's code, not directly by the LLM. The LLM should only be given instructions that are part of its trusted system prompt or manifest. | LLM | SKILL.md:3 |
Scan History
Embed Code
[](https://skillshield.io/report/54109bb7142b242e)
Powered by SkillShield