Security Audit
cm
github.com/Mrc220/agent_flywheel_clawdbot_skills_and_integrationsTrust Assessment
cm received a trust score of 66/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Potential Prompt Injection in CM tool inputs, High-privilege operations via `cm guard --install` and `CASS_PATH`, Supply Chain Risk from untrusted `.cass/` content and import commands.
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit c7bd8e0f). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection in CM tool inputs The skill instructs the agent to use the `cm` tool with various user-controlled string inputs (e.g., task descriptions for `cm context`, rule content for `cm playbook add`, reasons/summaries for `cm mark`/`cm outcome`). The skill explicitly states that LLMs are used internally by the `cm` tool for tasks like reflection, validation, and semantic search. If these user inputs are directly incorporated into LLM prompts within the `cm` tool without robust sanitization or instruction-following defenses, an attacker could inject malicious instructions to manipulate the LLM's behavior. While 'Secret Sanitization' is mentioned, it primarily targets data exfiltration of secrets, not necessarily prompt injection that manipulates LLM instructions. Implement robust prompt injection defenses for all LLM inputs derived from user-controlled strings within the `cm` tool. This includes input validation, instruction-following safeguards, and potentially using LLM-specific prompt engineering techniques to isolate user input from system instructions. | Unknown | SKILL.md:67 | |
| HIGH | High-privilege operations via `cm guard --install` and `CASS_PATH` The `cm guard --install` command installs Claude Code and Git pre-commit hooks, which are high-privilege operations that modify the agent's environment and potentially the user's Git repositories. If the `cm` tool itself were compromised, these hooks could be used to execute arbitrary malicious code. Additionally, the `CASS_PATH` environment variable allows specifying the path to the `cass` binary. If `cm` executes this binary without proper path validation or sanitization of `CASS_PATH`, it could be vulnerable to command injection, allowing an attacker to execute arbitrary commands by manipulating this environment variable. 1. For `cm guard --install`: Ensure strict integrity checks for the installed hooks. Provide clear warnings and require explicit user confirmation for installation. Consider sandboxing the execution environment for hooks if possible. 2. For `CASS_PATH`: Always use the full, validated path to the `cass` binary. Sanitize or strictly validate the `CASS_PATH` environment variable to prevent injection of malicious commands or paths. | Unknown | SKILL.md:169 | |
| MEDIUM | Supply Chain Risk from untrusted `.cass/` content and import commands The skill encourages committing project-level configuration and rules (`.cass/playbook.yaml`, `.cass/traumas.jsonl`, `blocked.yaml`) to a repository. If an agent or user clones a malicious or compromised repository, these files could introduce harmful rules, anti-patterns, or trauma patterns that could disrupt legitimate operations, encourage insecure coding practices, or block essential commands. Similarly, `cm playbook import` and `cm trauma import` allow importing content from external files, which could be untrusted. 1. Advise users to only clone repositories from trusted sources. 2. Implement mechanisms to validate the integrity and safety of imported `.cass` files (e.g., schema validation, content scanning for known malicious patterns beyond just 'doom patterns'). 3. Provide clear warnings when importing content from external sources. | Unknown | SKILL.md:269 |
Scan History
Embed Code
[](https://skillshield.io/report/4f6665034c42d9a0)
Powered by SkillShield