Trust Assessment
crunch-protocol-skill received a trust score of 65/100, placing it in the Caution category: the skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: Command Injection via Unsanitized Profile Fields, Data Exfiltration via Arbitrary Wallet File Path, and Ambiguous Parsing of User Requests Leading to Prompt Injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via Unsanitized Profile Fields.** The skill lets users add, update, or remove profiles stored in `profiles.json`. Profile fields such as `url` and `wallet` are used directly to construct `crunch-cli` commands (e.g., `-u <value>`, `-w <value>`). If a user supplies values containing shell metacharacters (e.g., `;`, `&&`, `|`, `$()`), the underlying shell executes them, allowing arbitrary command execution. The skill explicitly requires quoting for 'crunch name' but specifies no comparable sanitization or quoting for profile fields used in command construction. *Remediation:* apply robust sanitization and shell escaping to all user-provided values (especially `url`, `wallet`, `multisigAddress`) before they reach `crunch-cli` commands, e.g., via a library function that escapes arguments for the target shell, or by passing every argument as a distinct parameter to `subprocess.run` (or equivalent) so the shell never interprets them. | LLM | SKILL.md:40 |
| HIGH | **Data Exfiltration via Arbitrary Wallet File Path.** The skill lets users define a `wallet` path within profiles, stored in `profiles.json` and passed as `crunch-cli -w <path>`. A malicious user can point `wallet` at an arbitrary sensitive file (e.g., `/etc/passwd`, `/app/secrets.env`), which `crunch-cli` may then attempt to read; depending on its error handling or output, the file's contents could be exfiltrated to the user. *Remediation:* validate user-provided file paths to ensure they stay within an allowed, sandboxed directory, and reject arbitrary paths for sensitive parameters like `wallet`. Where possible, store keypair data securely rather than relying on file paths, and sanitize output strictly so sensitive file contents cannot leak through error messages. | LLM | SKILL.md:42 |
| MEDIUM | **Ambiguous Parsing of User Requests Leading to Prompt Injection.** The skill instructs the LLM to 'Parse the user request to identify: The action... The target... The name/identifier... Any additional parameters' when constructing `crunch-cli` commands. Specific phrases are mapped directly, but for all other requests the LLM's interpretation of 'action', 'target', and 'additional parameters' is neither strictly defined nor constrained. A malicious user could craft a prompt that manipulates this parsing, causing the LLM to identify unintended actions or parameters and execute `crunch-cli` commands neither the user nor the skill developer intended. This risk is inherent in open-ended natural-language parsing for command generation. *Remediation:* validate and sanitize parsed actions, targets, and parameters strictly; define a clear grammar or schema for expected inputs and reject anything that deviates; use allow-lists for actions and targets; type-check and sanitize parameters before passing them to the CLI; and prefer a structured approach to command generation over open-ended LLM parsing alone. | LLM | SKILL.md:100 |
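The critical finding's remediation can be sketched as follows. This is a minimal illustration, not the skill's actual code: the helper name `build_crunch_command` is hypothetical, and the `-u`/`-w` flags simply mirror the examples in the report. `shlex.quote` turns any value containing shell metacharacters into a single literal argument.

```python
import shlex

def build_crunch_command(action: str, url: str, wallet: str) -> str:
    """Hypothetical helper: build a crunch-cli command string with every
    user-supplied value shell-escaped via shlex.quote."""
    return " ".join([
        "crunch-cli",
        shlex.quote(action),
        "-u", shlex.quote(url),
        "-w", shlex.quote(wallet),
    ])

# A value laced with metacharacters ends up single-quoted, so the shell
# treats "; rm -rf ~" as inert data rather than a second command.
cmd = build_crunch_command("push", "https://x.test; rm -rf ~", "wallet.json")
```

Passing the arguments as a list directly to `subprocess.run` (with the default `shell=False`) achieves the same effect without any string assembly, and is generally the safer design.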
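For the high-severity finding, path confinement can be sketched like this. The sandbox directory `/app/wallets` and the helper name are assumptions for illustration; the technique (resolve the candidate path, then verify it stays under the allowed root) is the standard defense against `../` traversal and absolute-path substitution.

```python
from pathlib import Path

ALLOWED_WALLET_DIR = Path("/app/wallets")  # hypothetical sandbox root

def validate_wallet_path(user_path: str) -> Path:
    """Resolve a user-supplied wallet path and reject anything that escapes
    the allowed directory. Joining an absolute path like /etc/passwd onto
    the root replaces it entirely, so that case is rejected too."""
    candidate = (ALLOWED_WALLET_DIR / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_WALLET_DIR.resolve()):
        raise ValueError(f"wallet path escapes sandbox: {user_path!r}")
    return candidate
```

With this check in place, `validate_wallet_path("../../etc/passwd")` raises before the path ever reaches `crunch-cli -w`.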
[View full report](https://skillshield.io/report/1e9b516ceaaa0328)
Powered by SkillShield