Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-cis-problem-solving
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-cis-problem-solving received a trust score of 72/100, placing it in the Caution category: the skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Prompt Injection via Untrusted Instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
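A static check of the kind the Static Code Analysis layer might apply to the flagged instruction phrase can be sketched as follows. This is a minimal illustration only: the pattern list and function name are assumptions for this sketch, not SkillShield's actual rule set.

```python
import re

# Hypothetical static check for direct-instruction phrases in skill docs.
# The patterns below are illustrative, not SkillShield's real detection rules.
INJECTION_PATTERNS = [
    re.compile(r"follow the instructions in", re.IGNORECASE),
    re.compile(r"ignore (all|any) previous instructions", re.IGNORECASE),
]

def find_injection_lines(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a known injection phrase."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in INJECTION_PATTERNS):
            hits.append((lineno, line))
    return hits

doc = "# Skill\n\nFollow the instructions in [workflow.md](workflow.md)."
print(find_injection_lines(doc))
# → [(3, 'Follow the instructions in [workflow.md](workflow.md).')]
```

A real scanner would also need to handle multi-line phrasing and paraphrases, which is presumably where the LLM Behavioral Safety layer comes in.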
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via Untrusted Instruction. The skill's primary documentation (SKILL.md) contains a direct instruction to the host LLM to "Follow the instructions in [workflow.md]". This is untrusted content attempting to direct the LLM's behavior, a form of prompt injection; the LLM should analyze the skill, not execute instructions found within it. Remediation: remove direct instructions to the LLM from untrusted content. The LLM's behavior should be governed by its system prompt and trusted instructions, not by content within the skill package itself. If `workflow.md` contains information the skill needs, present it as data for the LLM to process, not as a command to follow. | LLM | SKILL.md:3 |
Embed Code
[SkillShield report](https://skillshield.io/report/294e43982d58443e)
Powered by SkillShield