Security Audit
bv
github.com/Mrc220/agent_flywheel_clawdbot_skills_and_integrations
Trust Assessment
bv received a trust score of 83/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include a potential command injection via an unsanitized filename in `bv --export-graph`, and a potential prompt injection via `beads.jsonl` content in `bv` output.
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit c7bd8e0f). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via unsanitized filename in `bv --export-graph`.** The `bv --export-graph <file.html>` command accepts an output filename. If an AI agent constructs this command from untrusted user input without sanitization, a malicious user could inject shell commands (e.g., `bv --export-graph "malicious.html; rm -rf /"`), leading to arbitrary command execution on the agent's host. Agents should strictly validate any user-provided filename passed to `bv --export-graph`, allowing only alphanumerics, hyphens, underscores, and a single dot for the extension, or write to a fixed, agent-controlled temporary directory and filename. See the mitigation sketch below the table. | Unknown | SKILL.md:172 |
| MEDIUM | **Potential Prompt Injection via `beads.jsonl` content in `bv` output.** The `bv` tool processes user-controlled content from `.beads/beads.jsonl` (e.g., bead titles and descriptions), which can surface in the `reason` field of `bv --robot-triage` output (e.g., under `recommendations`). If malicious instructions are embedded in the `beads.jsonl` data and an agent passes this `reason` field to its host LLM unsanitized, the LLM's behavior can be manipulated. Agents consuming `bv` output should sanitize or filter text fields such as `reason` before presenting them to the host LLM; the `bv` tool itself could also offer an option to strip or escape embedded LLM instructions from its text outputs. See the filtering sketch below the table. | Unknown | SKILL.md:192 |
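For the HIGH finding, a minimal mitigation sketch in Python, assuming the agent invokes `bv` via `subprocess` (the `export_graph` function and `SAFE_NAME` allowlist are hypothetical names, not part of `bv`). Passing the command as an argv list rather than a shell string means no shell ever interprets the filename; the allowlist is defense in depth on top of that.

```python
import re
import subprocess

# Allowlist per the finding's guidance: alphanumerics, hyphens,
# underscores, and a single ".html" extension. (Illustrative.)
SAFE_NAME = re.compile(r"^[A-Za-z0-9_-]+\.html$")

def export_graph(filename: str, workdir: str = ".") -> None:
    """Run `bv --export-graph` only with a validated output filename."""
    if not SAFE_NAME.fullmatch(filename):
        raise ValueError(f"rejected unsafe export filename: {filename!r}")
    # An argv list (no shell=True) hands the filename to bv as a single
    # argument, so payloads like "x.html; rm -rf /" are never executed
    # by a shell even if validation were bypassed.
    subprocess.run(
        ["bv", "--export-graph", filename],
        cwd=workdir,
        check=True,
    )
```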
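For the MEDIUM finding, a filtering sketch under the same assumptions. The JSON shape (`recommendations` entries with a `reason` field) follows the finding's description; the `triage` and `sanitize` helpers and the pattern list are illustrative, and pattern-matching alone is not a complete defense. A stronger complement is to wrap all `beads.jsonl`-derived text in explicit delimiters and instruct the host LLM to treat it strictly as data.

```python
import json
import re
import subprocess

# Strip common prompt-injection markers from free-text fields before
# forwarding them to the host LLM. Illustrative, not exhaustive: treat
# every beads.jsonl-derived string as untrusted.
INSTRUCTION_PATTERNS = re.compile(
    r"(ignore (all |previous )*instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def sanitize(text: str) -> str:
    return INSTRUCTION_PATTERNS.sub("[filtered]", text)

def triage() -> dict:
    """Run `bv --robot-triage` and sanitize reason fields in its output."""
    out = subprocess.run(
        ["bv", "--robot-triage"],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(out.stdout)
    for rec in data.get("recommendations", []):
        if "reason" in rec:
            rec["reason"] = sanitize(rec["reason"])
    return data
```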
Embed Code
[](https://skillshield.io/report/3523aadb2d42b385)
Powered by SkillShield