Trust Assessment
ralph received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 21 findings: 12 critical, 7 high, 1 medium, and 0 low severity. Key findings include Arbitrary command execution, File read + network send exfiltration, Dangerous call: subprocess.run().
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings21
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/monitor_build.py:24 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/monitor_build.py:38 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/run_ralph_loop.py:78 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/run_ralph_loop.py:97 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/run_ralph_loop.py:98 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/run_ralph_loop.py:157 | |
| CRITICAL | File read + network send exfiltration .env file access Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/init_prd.py:122 | |
| CRITICAL | File read + network send exfiltration .env file access Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/init_prd.py:136 | |
| CRITICAL | File read + network send exfiltration .env file access Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/init_prd.py:152 | |
| CRITICAL | File read + network send exfiltration .env file access Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/snail3d/clawd/ralph-skill/scripts/init_prd.py:164 | |
| CRITICAL | Arbitrary command execution via 'test_command' in config The 'scripts/run_ralph_loop.py' script executes the 'test_command' value loaded from 'ralph.config.json' using 'subprocess.run(..., shell=True)'. If an attacker can modify 'ralph.config.json' (e.g., by providing a malicious PRD or directly placing a malicious config file), they can inject and execute arbitrary shell commands on the host system. The 'shell=True' argument makes this a direct command injection vulnerability. Avoid 'shell=True' when executing commands with user-controlled input. Instead, pass commands and arguments as a list (e.g., 'subprocess.run([cmd, arg1, arg2], ...)') and ensure 'test_command' is parsed into a safe list of arguments. If 'shell=True' is strictly necessary, implement robust input validation and sanitization for 'test_command'. | LLM | scripts/run_ralph_loop.py:79 | |
| CRITICAL | Default configuration enables '--dangerously-skip-permissions' for Claude Code The 'scripts/init_prd.py' script creates 'ralph.config.json' with 'claude_code_flags: ["--dangerously-skip-permissions"]' by default. This flag, if used when invoking Claude Code, grants the LLM broad, unconfirmed permissions to execute actions, bypassing critical security prompts. While 'SKILL.md' later advises against its use, the default configuration enables it, creating a significant security risk by allowing the LLM to perform actions without explicit user approval. Remove '--dangerously-skip-permissions' from 'CONFIG_TEMPLATE'. The user should explicitly opt-in to such a dangerous flag, or it should not be used at all. The skill's own advice to "Don't use '--dangerously-skip-permissions'" should be reflected in the default configuration. | LLM | scripts/init_prd.py:139 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'check_session_status'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/snail3d/clawd/ralph-skill/scripts/monitor_build.py:24 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'get_session_log'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/snail3d/clawd/ralph-skill/scripts/monitor_build.py:38 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'run_loop'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/snail3d/clawd/ralph-skill/scripts/run_ralph_loop.py:157 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'run_test'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/snail3d/clawd/ralph-skill/scripts/run_ralph_loop.py:78 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'commit'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/snail3d/clawd/ralph-skill/scripts/run_ralph_loop.py:97 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'commit'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/snail3d/clawd/ralph-skill/scripts/run_ralph_loop.py:98 | |
| HIGH | PRD content directly injected into Claude Code prompt The 'SKILL.md' documentation explicitly shows how 'PRD.json' content is injected into the 'claude exec' command using '$(cat PRD.json)'. The 'PRD.json' contains fields like 'sp' (starter prompt), task titles, descriptions, and acceptance criteria. If an attacker can control the content of 'PRD.json' (e.g., by providing a malicious PRD file), they can inject arbitrary instructions into the Claude Code LLM, potentially manipulating its behavior, exfiltrating data, or performing unauthorized actions. Implement robust sanitization or a structured API for passing PRD content to the LLM, rather than direct string concatenation. Ensure that user-controlled fields within the PRD cannot be interpreted as instructions by the LLM. Consider using a dedicated API for passing structured data to the LLM instead of embedding it directly in the prompt string. | LLM | SKILL.md:300 | |
| MEDIUM | 'session_id' passed unsanitized to 'process' command The 'scripts/monitor_build.py' script constructs arguments for the 'process' command using 'f"sessionId:{self.session_id}"'. The 'session_id' is taken directly from 'sys.argv[1]'. If the 'process' command is an external executable and does not properly sanitize or quote its arguments, an attacker could inject shell metacharacters or additional command-line arguments into 'session_id' to manipulate the 'process' command's behavior or execute arbitrary commands. The nature of the 'process' command (likely an internal 'claude_code' tool) makes the exact exploit path uncertain, but the pattern is risky. Ensure that 'session_id' is strictly validated (e.g., alphanumeric, specific length) before being passed to 'subprocess.run'. If 'process' is an internal API, confirm it handles arguments safely. If 'process' is a shell command, pass arguments as separate list items to 'subprocess.run' and avoid 'shell=True'. | LLM | scripts/monitor_build.py:22 | |
| INFO | Existing code included in LLM prompt, increasing exfiltration surface The 'scripts/run_ralph_loop.py' script's 'generate_claude_prompt' function reads existing file content ('existing_code') and includes it directly in the prompt sent to the LLM. While this is intended for context, if the LLM is compromised via prompt injection (as identified in another finding), this mechanism could be leveraged to exfiltrate sensitive code, configuration, or other data from the project directory by instructing the LLM to output or transmit this content. Implement strict access controls and data filtering for content included in LLM prompts, especially for sensitive files. Consider redacting or summarizing code snippets rather than including full file contents. Ensure that the LLM's output channels are monitored and restricted to prevent unauthorized data transmission. | LLM | scripts/run_ralph_loop.py:140 |
Scan History
Embed Code
[](https://skillshield.io/report/72a670fe47445696)
Powered by SkillShield