Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-agent-architect
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-agent-architect received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 0 high, 1 medium, and 0 low severity. Key findings include broad filesystem access and data ingestion via `**/project-context.md`, and unrestricted storage of variables returned by the `bmad-init` skill.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Broad filesystem access and data ingestion via `**/project-context.md`.** The skill instructs the agent to search the entire filesystem (`**`) for `project-context.md` and load its contents as a "foundational reference." This grants excessive read permissions and poses a critical data exfiltration risk: any sensitive information in `project-context.md` (e.g. internal URLs, API endpoints, project secrets, architectural diagrams with sensitive details) could be ingested into the LLM's context and exposed to the user or other systems. *Remediation:* restrict file loading to a specific, known, and secure directory (e.g. `.` or a designated config directory); avoid loading arbitrary files found by broad filesystem searches; apply strict content filtering or sanitization if loading external files is unavoidable; and ensure `project-context.md` contains no sensitive information if it must be loaded. | LLM | SKILL.md:40 |
| MEDIUM | **Unrestricted storage of variables from the `bmad-init` skill.** The skill instructs the agent to "Store all returned vars" from the `bmad-init` skill. Without explicit filtering or sanitization, sensitive configuration variables (e.g. API keys, internal system details, user PII) could end up stored in the LLM's context. While `bmad-init` is external and its implementation is not provided, the instruction to store *all* variables creates a potential data exfiltration vector if `bmad-init` returns sensitive data. *Remediation:* modify the instruction to explicitly list and filter only the necessary variables (e.g. `{user_name}`, `{communication_language}`); avoid storing sensitive information in the LLM's context; and ensure `bmad-init` itself is secure and returns only data explicitly intended for the LLM's context. | LLM | SKILL.md:34 |
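The remediation for the critical finding, confining file loads to a single known directory instead of a `**` glob, can be sketched as follows. This is a minimal illustration, not part of the audited skill; the function name and the assumption that the context file lives at the project root are hypothetical.

```python
from pathlib import Path


def load_context_file(root: Path, filename: str = "project-context.md") -> str:
    """Load a context file only if it resolves inside the allowed root.

    Replaces a broad `**` filesystem search with one known location,
    and verifies the resolved path cannot escape `root` via `..`
    components or symlinks.
    """
    root = root.resolve()
    candidate = (root / filename).resolve()
    if not candidate.is_relative_to(root):
        raise PermissionError(f"{candidate} escapes the allowed directory")
    if not candidate.is_file():
        raise FileNotFoundError(candidate)
    return candidate.read_text(encoding="utf-8")
```

Resolving the path before the containment check is the important detail: checking the unresolved string would let `../` segments or symlinks slip past the allowlist.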
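Likewise, the medium finding's remediation, storing only allowlisted variables rather than everything `bmad-init` returns, reduces to a one-line filter. A minimal sketch; the variable names are taken from the report's example and the function name is hypothetical, since `bmad-init`'s actual output is not specified.

```python
# Allowlist of variables the agent is permitted to retain in context.
# Hypothetical names, drawn from the report's remediation example.
ALLOWED_VARS = {"user_name", "communication_language"}


def filter_init_vars(returned_vars: dict) -> dict:
    """Keep only explicitly allowlisted variables; drop everything else."""
    return {k: v for k, v in returned_vars.items() if k in ALLOWED_VARS}
```

An allowlist fails closed: any new variable `bmad-init` starts returning is dropped by default until someone deliberately adds it, which is the safer posture for context stored in an LLM.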
[View the full report](https://skillshield.io/report/5a5923a83d3f7957)
Powered by SkillShield