Security Audit
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-init
github.com/PabloLION/bmad-plugin

Trust Assessment
PabloLION/bmad-plugin:plugins/bmad/skills/bmad-init received a trust score of 56/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 3 high, 0 medium, and 1 low severity. Key findings include unsafe deserialization / dynamic eval, path traversal via user-controlled module code and skill path, and arbitrary file write and directory creation via unsanitized paths.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 68/100, indicating areas for improvement.
Last analyzed on April 11, 2026 (commit 17efb6ce). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
**HIGH — Unsafe deserialization / dynamic eval**
Layer: Manifest · Location: `plugins/bmad/skills/bmad-init/scripts/bmad_init.py:244`
Evidence: decryption followed by code execution.
Recommendation: Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions.

**HIGH — Path Traversal via User-Controlled Module Code and Skill Path**
Layer: LLM · Location: `scripts/bmad_init.py:107`
The `bmad_init.py` script constructs file paths from user-controlled arguments such as `--module` (`module_code`) and `--skill-path`. These arguments are not sanitized for path traversal sequences (e.g., `../`), so an attacker could supply a malicious `module_code` or `skill_path` to read or write files outside the intended project or skill directories. Specifically, `find_target_module_yaml` and `load_module_config` permit arbitrary file reads, while `write_answers_to_config` and `create_directories` permit arbitrary file writes and directory creation.
Recommendation: Sanitize the user-provided `module_code` and `skill_path` arguments. Either reject inputs containing path separators (`/`, `\`) or parent-directory references (`..`), or resolve each constructed path to its absolute form and verify that it remains within the expected base directory using `pathlib.Path.resolve()` together with `pathlib.Path.is_relative_to()` (or a prefix check against the base directory).

**HIGH — Arbitrary File Write and Directory Creation via Unsanitized Paths**
Layer: LLM · Location: `scripts/bmad_init.py:269`
The `write_answers_to_config` function uses the user-controlled `module_code` to build the directory path for `config.yaml` without sanitization. Similarly, `create_directories` uses directory templates from `module.yaml` (which can itself be loaded from a user-controlled path) together with user-controlled answers to create directories. If `module_code` or a directory template contains path traversal sequences (e.g., `../`) or an absolute path, an attacker could write configuration files or create directories at arbitrary filesystem locations.
Recommendation: Before creating directories or writing files, verify that the final resolved path is strictly contained within `project_root` or another designated safe area. Validate `module_code` to prevent path traversal; require directory templates to be relative paths, resolve them against a trusted base directory, and verify the final path's safety.

**LOW — Unpinned Python Dependencies**
Layer: LLM · Location: `scripts/bmad_init.py:2`
The `requires-python` and `dependencies` declarations in `scripts/bmad_init.py` specify `pyyaml` without a version pin. A future `pyyaml` release could introduce vulnerabilities or breaking changes, and an unpinned name widens exposure to malicious package updates.
Recommendation: Pin `pyyaml` to a specific, known-good version (e.g., `pyyaml==6.0.1`) to ensure deterministic builds and mitigate risks from future malicious or vulnerable releases.
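The path-containment check recommended for the two traversal findings above can be sketched as a small helper. This is a minimal illustration using `pathlib`, not code from the audited script; the function name `resolve_inside` is hypothetical.

```python
from pathlib import Path


def resolve_inside(base: Path, *parts: str) -> Path:
    """Join *parts onto base and verify the result stays inside base.

    Raises ValueError if the resolved path escapes the base directory,
    e.g. via '../' components or an absolute path segment.
    """
    base = base.resolve()
    # joinpath with an absolute segment would discard base entirely,
    # so the containment check below also catches absolute-path inputs.
    candidate = base.joinpath(*parts).resolve()
    if not candidate.is_relative_to(base):  # Path.is_relative_to: Python 3.9+
        raise ValueError(f"path escapes base directory: {candidate}")
    return candidate
```

A caller such as `write_answers_to_config` could then build its target as `resolve_inside(project_root, module_code, "config.yaml")` and fail closed on inputs like `../../etc` or `/etc/cron.d`.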
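For the unpinned-dependency finding, the fix is a one-line change to the script's inline metadata. Assuming `bmad_init.py` declares its dependencies in a PEP 723 `# /// script` block (consistent with the reported location at line 2), the pinned form would look like the following; the version `6.0.1` is illustrative, not a recommendation of that specific release.

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "pyyaml==6.0.1",  # pinned: unpinned "pyyaml" is the LOW finding above
# ]
# ///
```

Runners that understand PEP 723 (e.g., `uv run bmad_init.py`) will then install exactly the pinned version, giving deterministic builds.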
Full report: [skillshield.io/report/6ba74de7186e4d2f](https://skillshield.io/report/6ba74de7186e4d2f)