Trust Assessment
onchain received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding (0 critical, 1 high, 0 medium, 0 low severity): Potential Command Injection via User Input to CLI Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User Input to CLI Arguments.** The 'onchain' skill defines a command-line interface whose commands accept user-supplied arguments (token names, wallet addresses, search queries, configuration names). If an AI agent constructs and executes these commands by interpolating untrusted user input directly, without shell escaping, an attacker can inject arbitrary shell commands and execute them on the host where the agent runs. The skill does not itself implement the `onchain` tool, but it exposes an interface that creates a significant vulnerability if used unsafely by the integrating agent. Integrating agents must sanitize and shell-escape all user-provided arguments (e.g., with `shlex.quote` in Python) before constructing and executing `onchain` commands, and the skill documentation should carry a prominent warning about this risk along with guidance on safe command construction. | LLM | SKILL.md:42 |
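The mitigation recommended in the finding can be sketched as follows. This is a minimal illustration, not part of the skill itself: the `build_onchain_command` helper and the example subcommand/argument names are hypothetical, and only the use of `shlex.quote` comes from the finding's recommendation.

```python
import shlex

def build_onchain_command(subcommand, *user_args):
    """Build a shell-safe `onchain` command string from untrusted arguments.

    Each user-supplied argument is passed through shlex.quote, so shell
    metacharacters (;, |, $, backticks) are treated as literal text rather
    than interpreted by the shell.
    """
    parts = ["onchain", subcommand] + [shlex.quote(str(a)) for a in user_args]
    return " ".join(parts)

# A malicious "token name" stays confined to a single quoted argument:
cmd = build_onchain_command("price", "SOL; rm -rf ~")
# cmd == "onchain price 'SOL; rm -rf ~'"
```

Where possible, the safer option is to skip shell strings entirely and pass an argument list to `subprocess.run(["onchain", subcommand, *user_args])`, which never invokes a shell and so leaves nothing to inject into.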
Scan History
Embed Code
[View the full report](https://skillshield.io/report/a559800a13405453)
Powered by SkillShield