Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-correct-course
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-correct-course received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The sole finding is Prompt Injection via an external instruction reference.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via external instruction reference.** The untrusted `SKILL.md` instructs the LLM to "Follow the instructions in ./workflow.md." This is a prompt injection attempt: it tries to make the LLM execute instructions loaded from an external, potentially malicious file. If followed, it could lead to arbitrary code execution, data exfiltration, or other harmful actions. Remediation: remove the instruction to follow external files. All instructions for the LLM should be defined explicitly within the trusted skill manifest or primary skill file, and the LLM should never be told to load or execute instructions from arbitrary files in untrusted skill content. | LLM | SKILL.md:1 |
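The check described in the finding can be sketched as a simple lint pass over an untrusted `SKILL.md`. This is a hypothetical illustration, not SkillShield's actual detector: the function name `find_external_instruction_refs` and its regex are assumptions for the sketch.

```python
import re

# Hypothetical detector: flag lines in an untrusted SKILL.md that tell the
# LLM to follow/execute instructions from an external file (the pattern
# reported in the CRITICAL finding above).
EXTERNAL_REF = re.compile(
    r"(follow|execute|run|load)\b.*?\b(instructions?|steps?)\b.*?"
    r"(\./|\.\./|/)?[\w./-]+\.(md|txt|yaml|yml|json)",
    re.IGNORECASE,
)

def find_external_instruction_refs(skill_md: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that reference external instruction files."""
    hits = []
    for lineno, line in enumerate(skill_md.splitlines(), start=1):
        if EXTERNAL_REF.search(line):
            hits.append((lineno, line.strip()))
    return hits

# The exact phrasing flagged in this report is caught:
print(find_external_instruction_refs("Follow the instructions in ./workflow.md."))
```

A regex pass like this is cheap but coarse; a real scanner would also parse the skill manifest and resolve which referenced files fall outside the trusted skill definition.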
Full report: https://skillshield.io/report/8c97149e97a0a434