Security Audit
ru
github.com/Mrc220/agent_flywheel_clawdbot_skills_and_integrations

Trust Assessment
ru received a trust score of 26/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 2 medium, and 0 low severity. Key findings include arbitrary command execution via per-repo hooks, arbitrary command execution via repository-defined quality gates, and unsecured installation via `curl | bash`.
The analysis covered four layers: manifest_analysis, llm_behavioral_safety, static_code_analysis, and dependency_graph. The static_code_analysis layer scored lowest at 26/100, matching the overall trust score.
Last analyzed on February 11, 2026 (commit c7bd8e0f). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
CRITICAL: Arbitrary command execution via per-repo hooks
Layer: Unknown · Location: SKILL.md:200

The `ru agent-sweep` command explicitly executes `pre_hook` and `post_hook` commands defined within a repository's `.ru-agent.yml` file. If an attacker can commit a malicious `.ru-agent.yml` to a repository, the `ru` agent will automatically execute arbitrary commands on the system where `ru` is running. This is a direct command injection vulnerability, since the content of these hooks is entirely controlled by the repository owner and contributors.

Recommendations:
1. **Restrict execution**: Do not allow arbitrary shell commands in `pre_hook` and `post_hook`. Instead, define a limited set of allowed actions or execute hooks in a strictly sandboxed environment.
2. **User confirmation**: For any repository with `pre_hook` or `post_hook` defined, require explicit user confirmation before execution, especially when the `.ru-agent.yml` file has changed (see the sketch below).
3. **Integrity checks**: Implement cryptographic signing or other integrity checks for `.ru-agent.yml` files to ensure they have not been tampered with.
4. **Least privilege**: Run `ru` in an environment with the fewest possible privileges.
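To make recommendation 2 concrete, here is a minimal Bash sketch of a confirmation gate. It assumes hooks live in `.ru-agent.yml` and that approvals can be cached under a hypothetical `~/.ru/approved-hooks` directory; neither path is confirmed as ru's actual layout.

```bash
#!/usr/bin/env bash
# Hypothetical gate: refuse to run repo-defined hooks until the user has
# approved this exact version of .ru-agent.yml. Paths are illustrative.
set -euo pipefail

repo_dir="$1"
hook_file="$repo_dir/.ru-agent.yml"
approval_dir="$HOME/.ru/approved-hooks"   # illustrative cache location
mkdir -p "$approval_dir"

[ -f "$hook_file" ] || exit 0             # no hooks defined, nothing to gate

# Key approvals by content hash so any edit to the file forces a re-prompt.
hash="$(sha256sum "$hook_file" | cut -d' ' -f1)"
if [ ! -f "$approval_dir/$hash" ]; then
  echo "Repository defines hooks in $hook_file:"
  cat "$hook_file"
  read -rp "Run these hooks? [y/N] " answer
  [ "$answer" = "y" ] || { echo "Hooks skipped." >&2; exit 0; }
  touch "$approval_dir/$hash"             # remember this exact content as approved
fi

echo "Hooks approved; safe to execute pre_hook/post_hook now."
```

Keying the approval on a content hash rather than the file path means any later edit to the hooks, however small, triggers a fresh prompt.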
HIGH: Arbitrary command execution via repository-defined quality gates
Layer: Unknown · Location: SKILL.md:250

The `ru agent-sweep` and `ru review` commands automatically execute quality gate commands (e.g., `npm test`, `make test`, `pytest`, `shellcheck`) based on the detected project type. If a malicious actor controls a repository's contents (e.g., `package.json`, `Makefile`, `pyproject.toml`), they can inject arbitrary commands into these build and test scripts. Because `ru` automatically processes "dirty repos" and runs these quality gates, malicious code from untrusted repositories gets executed without review.

Recommendations:
1. **Sandboxing**: Execute quality gate commands within a strictly isolated, sandboxed environment (e.g., a minimal-privilege Docker container, gVisor, or firejail) that prevents access to host resources (see the sketch below).
2. **User confirmation**: Require explicit user confirmation before running quality gates on newly added or untrusted repositories.
3. **Allowlist**: Consider an allowlist approach, permitting only specific, known-safe commands and arguments.
4. **Integrity checks**: Implement integrity checks for build/test configuration files (e.g., `package.json`, `Makefile`) to detect tampering.
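As one way to implement the sandboxing recommendation, the following sketch runs a quality gate in a throwaway Docker container. The `node:20-slim` image and `npm test` command are placeholders for whatever project type is detected, and tools that write into the source tree would need a writable copy instead of the read-only mount used here.

```bash
# No network, read-only root filesystem and source mount, dropped capabilities,
# and resource limits; writable scratch space exists only under /tmp.
docker run --rm \
  --network none \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --memory 1g --pids-limit 256 \
  -e HOME=/tmp \
  -v "$PWD":/work:ro \
  -w /work \
  node:20-slim npm test
```

`--network none` is the key line for this threat model: even if the test script is malicious, it has no egress path for exfiltration.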
HIGH: Unsecured installation via `curl | bash`
Layer: Unknown · Location: SKILL.md:300

The recommended installation method pipes a script directly from a remote GitHub URL to `bash` (`curl -fsSL ... | bash`). This introduces a significant supply chain risk: if the GitHub repository or the `install.sh` file is compromised, an attacker can inject malicious code into the installation script, leading to arbitrary code execution on the user's system during installation. No integrity check (e.g., checksum verification) is performed before execution.

Recommendations:
1. **Provide checksums**: Publish a way to verify the integrity of `install.sh` (e.g., a SHA-256 checksum) that users can check before execution (see the sketch below).
2. **Signed packages**: Distribute the tool as signed packages (e.g., `.deb`, `.rpm`, or a Homebrew formula) through trusted package managers.
3. **Review before execution**: Advise users to review the script's content before piping it to `bash`.
4. **Alternative installation**: Provide a multi-step installation process that allows review and verification.
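A safer installation flow is sketched below, under the assumption that the project publishes a SHA-256 digest for `install.sh` (it currently does not). The URL and digest are placeholders, not values taken from the skill's documentation.

```bash
#!/usr/bin/env bash
# Sketch: fetch the installer to disk, verify a published digest, review it,
# and only then execute it. INSTALL_URL stands in for whatever URL the
# README currently pipes to bash; the digest is a placeholder the project
# would need to publish alongside the script.
set -euo pipefail

INSTALL_URL="https://example.com/install.sh"        # substitute the real URL
expected_sha256="replace-with-published-digest"

curl -fsSL -o install.sh "$INSTALL_URL"
echo "${expected_sha256}  install.sh" | sha256sum -c - || {
  echo "Checksum mismatch; refusing to run installer." >&2
  exit 1
}

less install.sh    # manual review step before execution
bash install.sh
```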
MEDIUM: Potential command injection via unsanitized repository identifiers
Layer: Unknown · Location: SKILL.md:150

The `ru` tool accepts user-provided repository identifiers (e.g., `owner/repo`, `https://github.com/owner/repo`, `owner/repo@branch as custom-name`) for commands like `ru add` and in configuration files. While the skill mentions "Path security validation prevents traversal attacks" and "No Global `cd`", it does not detail how *all* user-provided strings are sanitized before being used in shell commands (e.g., `git clone $URL`, `git -C $PATH`). If these inputs are not thoroughly escaped or validated, a malicious string (e.g., `owner/repo; malicious_command`) could lead to command injection.

Recommendations:
1. **Strict input validation**: Rigorously validate all repository identifiers against expected patterns (e.g., a GitHub owner/repo regex or a valid URL format), as in the sketch below.
2. **Quoting and escaping**: Always quote and properly escape user-provided variables when incorporating them into shell commands so they cannot be interpreted as separate commands or arguments.
3. **Use `printf %q`**: In Bash, use `printf %q` to safely quote variables destined for shell commands.
4. **Avoid `eval`**: Never pass user-controlled input to `eval`.
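A minimal sketch combining recommendations 1 and 2, using a conservative, hypothetical allowlist pattern; this is not ru's actual validation logic.

```bash
#!/usr/bin/env bash
# Hypothetical pre-flight check for a user-supplied owner/repo identifier.
set -euo pipefail

validate_repo_id() {
  local id="$1"
  # Conservative allowlist: letters, digits, dot, dash, underscore on each
  # side of a single slash. Spaces, ';', '$', and backticks are all rejected.
  [[ "$id" =~ ^[A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+$ ]] || {
    printf 'Rejected suspicious repository identifier: %q\n' "$id" >&2
    return 1
  }
}

repo_id="$1"
validate_repo_id "$repo_id"
# Quoted expansion plus the `--` separator keeps the argument from ever being
# parsed as an option or a second command.
git clone -- "https://github.com/${repo_id}.git"
```

Validation and quoting are deliberately layered: even if the regex were loosened later, the quoted expansion still prevents word splitting and command substitution.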
MEDIUM: Potential data exfiltration by AI agent processing sensitive uncommitted changes
Layer: Unknown · Location: SKILL.md:180

The `ru agent-sweep` command orchestrates AI coding agents (Claude Code) to "analyze uncommitted changes" and "generate structured commit messages." A file denylist prevents *committing* secrets, but the AI agent still processes the content of uncommitted changes, which may include sensitive information (e.g., API keys, PII, proprietary code) present in the working directory. Nothing describes how the agent's processing environment or output channels are secured against exfiltrating this data, whether intentionally (via a malicious prompt) or unintentionally (via verbose logging or commit messages that quote sensitive snippets).

Recommendations:
1. **Sanitize AI input**: Before sending uncommitted changes to the AI, redact sensitive patterns aggressively, going beyond the file denylist alone (see the sketch below).
2. **Validate AI output**: Strictly validate and sanitize all AI-generated output (commit messages, proposed code changes) so no sensitive data slips through.
3. **Provider controls**: Ensure the AI model provider has strong confidentiality agreements and technical controls against data leakage.
4. **Local/private models**: Consider local or privately hosted models for highly sensitive data, or models designed for secure code analysis.
5. **Sandbox AI execution**: Run the agent in a restricted sandbox that blocks network egress and file-system access beyond what its function strictly requires.
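A sketch of the input-sanitization recommendation follows. The secret patterns and the `sanitized.diff` handoff are illustrative and not part of ru; a dedicated scanner such as gitleaks would be a stronger choice in practice.

```bash
# Hypothetical redaction pass over a diff before it reaches the agent.
redact_diff() {
  sed -E \
    -e 's/AKIA[0-9A-Z]{16}/[REDACTED_AWS_KEY]/g' \
    -e 's/ghp_[A-Za-z0-9]{36}/[REDACTED_GITHUB_TOKEN]/g' \
    -e 's/-----BEGIN( [A-Z]+)? PRIVATE KEY-----/[REDACTED_PRIVATE_KEY]/g'
}

# Only the sanitized view of the working tree should ever reach the model.
git diff | redact_diff > sanitized.diff
```

Pattern-based redaction reduces accidental leakage but cannot catch every secret, which is why the finding also recommends output validation and egress restrictions as independent layers.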