Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-distillator
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-distillator received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are "Potential Command Injection via User-Provided File Paths" and "Broad File System Read Access Poses Data Exfiltration Risk".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via User-Provided File Paths.** The `SKILL.md` instructs the LLM to execute `scripts/analyze_sources.py` with user-provided `source_documents`. These documents can include file paths, folder paths, or glob patterns. If the LLM constructs a shell command by directly interpolating these user inputs without proper sanitization or escaping, a malicious user could inject arbitrary shell commands (e.g., `file.md; rm -rf /`), leading to arbitrary code execution on the host system. *Remediation:* instruct the LLM to sanitize all user-provided `source_documents` arguments before constructing the command string for `scripts/analyze_sources.py`. A safer approach is to use a programmatic execution method (e.g., `subprocess.run` in Python with `shell=False` and arguments passed as a list) that prevents shell interpretation of arguments. | LLM | SKILL.md:41 |
| HIGH | **Broad File System Read Access Poses Data Exfiltration Risk.** The skill's core functionality, as described in `SKILL.md` and implemented in `scripts/analyze_sources.py`, requires reading the full content of user-specified `source_documents`. These inputs can be arbitrary file paths, including potentially sensitive system files (e.g., `/etc/passwd`). While necessary for the skill's stated purpose of distillation, this broad read access, especially when combined with the "graceful degradation" mode in which the main LLM directly processes file content, creates a significant data exfiltration risk if the agent is compromised (e.g., via prompt injection) or prompted to output sensitive information from these files. *Remediation:* implement strict sandboxing for the skill's execution environment to limit file system access to only necessary directories; implement robust output filtering and content moderation to prevent the agent from inadvertently or maliciously exfiltrating sensitive data read from user-provided files; and ensure that the LLM's responses are strictly confined to the intended distillate format and do not include raw sensitive content unless explicitly and safely designed. | LLM | SKILL.md:26 |
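The command-injection remediation above can be sketched in Python. This is a minimal illustration, not the skill's actual code: the wrapper name `run_analyzer` and the existence-check policy are assumptions, but the core technique (`subprocess.run` with an argument list and no shell) is exactly what the finding recommends.

```python
import subprocess
from pathlib import Path

def run_analyzer(source_documents, script="scripts/analyze_sources.py"):
    """Hypothetical safe wrapper around the analyzer script.

    Arguments are passed as a list with shell=False (the default), so
    shell metacharacters in a filename are never interpreted: a path
    like `file.md; rm -rf /` is just one literal, non-existent filename,
    not a command chain.
    """
    argv = ["python3", script]
    for doc in source_documents:
        # Reject anything that is not an existing path. If glob patterns
        # must be supported, expand them in Python (glob/pathlib), never
        # by handing the string to a shell.
        p = Path(doc)
        if not p.exists():
            raise ValueError(f"refusing non-existent path: {doc!r}")
        argv.append(str(p))
    return subprocess.run(argv, capture_output=True, text=True, check=False)
```

Contrast this with `subprocess.run(f"python3 {script} {' '.join(docs)}", shell=True)`, where the injected `; rm -rf /` would execute.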
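The data-exfiltration remediation's "limit file system access to only necessary directories" can likewise be sketched as a path allow-list check. The root directory name and the function name `resolve_within` are assumptions for illustration; real sandboxing would additionally use OS-level isolation, this only blocks path traversal out of an allow-listed root.

```python
from pathlib import Path

# Assumed allow-listed directory for source documents (illustrative).
ALLOWED_ROOT = Path("docs")

def resolve_within(user_path, root=ALLOWED_ROOT):
    """Resolve a user-supplied path, refusing anything outside `root`.

    `resolve()` collapses `..` segments and symlinks before the
    containment check, so `../../etc/passwd` or an absolute
    `/etc/passwd` is rejected rather than silently followed.
    """
    root = Path(root).resolve()
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

Note that joining an absolute `user_path` onto `root` yields the absolute path itself (standard `pathlib` behavior), which is why the `is_relative_to` check, not the join, is what enforces containment.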
Full report: https://skillshield.io/report/ff7f1ccc19766d75