Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-sprint-status
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-sprint-status received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified one finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is prompt injection via an instruction to read an external file.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
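To illustrate how per-layer scores might roll up into a single trust figure, here is a minimal sketch in Python. Both the per-layer values and the plain-average aggregation are assumptions for illustration only; this report does not publish SkillShield's exact layer scores or weighting.

```python
# Hypothetical layer scores (illustrative only; the report states each layer
# scored 70 or above but does not list exact per-layer values).
layers = {
    "Manifest Analysis": 80,
    "Static Code Analysis": 75,
    "Dependency Graph": 85,
    "LLM Behavioral Safety": 70,
}

def composite_score(layer_scores: dict[str, int]) -> int:
    """Assumed aggregation: plain average of layer scores, rounded."""
    return round(sum(layer_scores.values()) / len(layer_scores))

print(composite_score(layers))  # not necessarily the published 73/100
```

A real scoring model would likely weight layers unevenly and apply penalties for critical findings, which may explain why a skill with all layers at 70+ still lands at 73 in the Caution band.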
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via instruction to read external file | LLM | SKILL.md:3 |

The skill's primary markdown file contains a direct instruction to the host LLM within the untrusted content block. The instruction 'Follow the instructions in ./workflow.md.' attempts to manipulate the LLM's behavior by directing it to process content from another file. This is a form of prompt injection: the untrusted content is issuing commands to the LLM, which could lead to unintended actions or further malicious instructions if the referenced file is itself untrusted or compromised. Remediation: remove or sanitize any direct instructions to the LLM from untrusted content. The skill's intended workflow should be defined explicitly within the trusted skill definition or through trusted tool calls, rather than by instructing the LLM to read external files from untrusted input.
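The flagged pattern can be caught with a simple static check. Below is a minimal Python sketch of one such heuristic; the regular expression and function name are hypothetical and are not SkillShield's actual rule set, but they flag the exact line cited in the finding.

```python
import re

# Hypothetical heuristic: flag lines in a skill's markdown that direct the
# host LLM to follow/read/execute instructions from another file, a common
# prompt-injection vector.
INSTRUCTION_PATTERN = re.compile(
    r"\b(follow|read|execute|obey)\b.*\b(instructions?|steps?)\b.*\./\S+",
    re.IGNORECASE,
)

def find_external_instruction_lines(markdown: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs matching the heuristic."""
    hits = []
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        if INSTRUCTION_PATTERN.search(line):
            hits.append((lineno, line.strip()))
    return hits

skill_md = """# bmad-sprint-status
A sprint status helper.
Follow the instructions in ./workflow.md.
"""
print(find_external_instruction_lines(skill_md))
# → [(3, 'Follow the instructions in ./workflow.md.')]
```

A heuristic like this is noisy on its own (legitimate docs may say "read the steps in ./CONTRIBUTING.md"), which is presumably why the report pairs static signals with an LLM behavioral-safety layer.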
Scan History
Embed Code
[View this report on SkillShield](https://skillshield.io/report/e89fee13a22b1ce0)
Powered by SkillShield