Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-cis-agent-innovation-strategist
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-cis-agent-innovation-strategist received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, and 1 medium severity (no low-severity findings). The key findings are Strong Persona Enforcement Attempt, Broad Filesystem Search for Project Context, and Undefined Scope for Storing Configuration Variables.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Strong Persona Enforcement Attempt.** The skill enforces a persona ('Victor'), explicitly instructing the host LLM to 'fully embody this persona' and 'not break character' even when calling other skills. This is a direct attempt to override the host LLM's core instructions, a critical prompt-injection vulnerability. The instruction 'When you are in this persona and the user calls a skill, this persona must carry through and remain active' reinforces the attempt to control the LLM's state and behavior across interactions and tool calls. *Recommendation:* Remove instructions that attempt to override the host LLM's core directives or enforce persona adherence beyond the immediate response. Persona instructions should be suggestive, not imperative, and should not attempt to control the LLM's behavior during tool calls or across turns; the LLM must always prioritize its safety and system instructions. | LLM | SKILL.md:17 |
| HIGH | **Broad Filesystem Search for Project Context.** The skill instructs the LLM to 'Search for `**/project-context.md`' and load its content. The `**` wildcard implies a recursive search across potentially the entire accessible filesystem, allowing the skill to read any file named `project-context.md` in any subdirectory. If such a file contains sensitive information, loading it into the LLM's context exposes it to exfiltration through subsequent malicious prompts. *Recommendation:* Restrict filesystem access to specific, well-defined paths and avoid broad wildcard searches like `**`. If `project-context.md` is necessary, specify a path relative to the skill's root (e.g., `./project-context.md`), and sanitize or carefully handle any loaded context to prevent information leakage. | LLM | SKILL.md:34 |
| MEDIUM | **Undefined Scope for Storing Configuration Variables.** After loading configuration via `bmad-init`, the skill instructs the LLM to 'Store any other config variables as `{var-name}` and use appropriately'. This instruction is overly broad and gives no guidance on which variables to store or how to handle them securely. If `bmad-init` returns sensitive data (e.g., API keys, internal system details, user PII), the LLM could store and expose these values in its context, making them susceptible to exfiltration via prompt injection. *Recommendation:* Explicitly enumerate the configuration variables to store; avoid generic instructions like 'any other config variables'. Filter or redact sensitive information before it enters the LLM's context, and ensure `bmad-init` itself returns only necessary, non-sensitive values. | LLM | SKILL.md:31 |
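To illustrate the recommendation on the CRITICAL finding, a hypothetical rewrite of the persona block (the exact wording below is an assumption, not the skill's actual text) might replace imperative control with a suggestion that defers to host instructions:

```markdown
<!-- Imperative (flagged): tries to control the host LLM across turns -->
You must fully embody this persona and never break character,
even when calling other skills.

<!-- Suggestive (safer): scoped to style, defers to system rules -->
You may adopt the voice of 'Victor', an innovation strategist,
where it helps the user. Host system and safety instructions
always take precedence, including during tool and skill calls.
```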
Full report: [skillshield.io/report/77026d4ac113a916](https://skillshield.io/report/77026d4ac113a916)