Trust Assessment
evm-wallet-skill received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 4 high, 0 medium, and 0 low severity. Key findings include: Potential Command Injection via User-Controlled Arguments to Node.js Scripts, Untrusted Dependency Installation via `npm install`, and Untrusted Code Source via Git Clone/Pull.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 10/100, indicating significant behavioral-safety risk.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Transaction Confirmation Bypass via Prompt Injection.** The skill's core functionality involves high-value financial transactions (sending tokens, swapping, contract writes). The skill explicitly instructs the AI agent to *always* obtain user confirmation before executing these actions (e.g., "Only add `--yes` after the user explicitly confirms.", "ALWAYS show the quote first and get user confirmation before executing."). A sophisticated prompt injection attack could trick the LLM into ignoring these safety instructions and executing transactions without explicit user consent, leading to significant financial loss. *Remediation:* Implement robust, LLM-agnostic guardrails and external verification for all financial transactions, such as a separate, user-facing confirmation UI not controlled by the LLM's output, or a multi-factor authentication step before sensitive commands execute. The LLM's safety mechanisms must be highly resilient to prompt injection attempts that try to bypass these confirmations. | LLM | SKILL.md:70 |
| HIGH | **Potential Command Injection via User-Controlled Arguments to Node.js Scripts.** The skill instructs the AI agent to execute Node.js scripts (`node src/*.js`) and pass them user-controlled arguments (e.g., `<chain>`, `<to_address>`, `<amount>`, `<function_signature>`, `[args...]`). Without access to the `src/*.js` source code, it cannot be confirmed that these arguments are sanitized and escaped before being used in internal shell commands or `eval` calls. This creates a potential command-injection vector if an attacker crafts malicious input that the underlying system then executes. *Remediation:* Strictly validate, sanitize, and escape or parameterize all user-provided arguments before they reach system calls, subprocess executions, or dynamic code evaluation; never concatenate user input directly into shell commands. | LLM | SKILL.md:80 |
| HIGH | **Untrusted Dependency Installation via `npm install`.** The skill instructs the AI agent to run `npm install` during initial setup and updates, fetching third-party packages from the npm registry. If any package or transitive dependency is malicious or vulnerable, this could lead to arbitrary code execution, data exfiltration, or other system compromise. The `package.json` that dictates these dependencies was not provided for review. *Remediation:* Pin all dependencies to exact versions, commit `package-lock.json`, audit regularly with tools like `npm audit`, and consider a private registry or vendored dependencies for critical applications. | LLM | SKILL.md:36 |
| HIGH | **Untrusted Code Source via Git Clone/Pull.** The skill instructs the AI agent to clone and pull code from an external GitHub repository (`https://github.com/surfer77/evm-wallet-skill.git`). If this repository is compromised or taken over by a malicious actor, subsequent `git clone` or `git pull` operations could introduce malicious code into the skill's environment. *Remediation:* Verify the integrity of the source repository; consider mirroring it to a trusted internal source or verifying commits with cryptographic signatures, and apply strict access controls and monitoring to the upstream repository. | LLM | SKILL.md:33 |
| HIGH | **Private Key Stored Locally, High Risk of Exfiltration.** The skill stores a self-sovereign EVM private key locally at `~/.evm-wallet.json`. Although the skill warns against sharing this file and sets `chmod 600` permissions, a private key file on the host system that the AI agent can read is a significant exfiltration risk: a compromised agent environment or a successful prompt injection could read and transmit the key, causing irreversible financial loss. *Remediation:* Explore hardware security modules (HSMs), secure enclaves, or external key-management services that do not expose the key to the agent's execution environment; sandbox and restrict access to `~/.evm-wallet.json`; and harden LLM guardrails against attempts to read or transmit the file's contents. | LLM | SKILL.md:15 |