Trust Assessment
llm_wallet received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings: an unpinned dependency in the manifest, a private key exposed via a command-line argument, and potential command injection via user-controlled arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned dependency in manifest.** The skill's manifest specifies the `llm-wallet-mcp` package without a version constraint, so the latest version is always installed. This could introduce breaking changes, security vulnerabilities, or malicious code if the package maintainer's account is compromised, leaving the skill open to supply chain attacks. *Recommendation:* pin `llm-wallet-mcp` to a specific, known-good version in the manifest to ensure deterministic and secure installations, e.g. `"package": "llm-wallet-mcp@1.2.3"`. | LLM | SKILL.md:1 |
| HIGH | **Private key exposed via command-line argument.** The `llm-wallet import --private-key <key>` command instructs the agent to accept a private key directly as a command-line argument. Credentials passed this way can be exposed in process lists (e.g. `ps aux`), shell history, or system logs; if the agent's environment is compromised or logs commands, the key could be exfiltrated. *Recommendation:* modify the `llm-wallet` tool to accept private keys via more secure channels, such as an interactive no-echo prompt (e.g. `read -s`), an environment variable, or a securely managed file, and update the skill documentation to reflect the secure input method. | LLM | SKILL.md:34 |
| MEDIUM | **Potential command injection via user-controlled arguments.** Several `llm-wallet` commands accept user-controlled input that could be vulnerable to command injection if the underlying `llm-wallet-mcp` tool does not properly sanitize or escape arguments before executing them in a shell. Specifically, `llm-wallet pay <url>`, `llm-wallet register-api <url>`, and `llm-wallet call-api <tool_name> --params <json>` take URLs, tool names, and JSON strings; a malicious user could craft these inputs to execute arbitrary commands on the host if the implementation is flawed. *Recommendation:* rigorously validate, sanitize, and escape all user-provided inputs before using them in shell commands or system calls, reject malformed or suspicious inputs, and prefer libraries or APIs that handle argument escaping automatically. | LLM | SKILL.md:64 |
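The unpinned-dependency finding can be checked mechanically. Below is a minimal sketch, assuming a manifest whose package entries use an `name@MAJOR.MINOR.PATCH` pin syntax (a hypothetical shape; the real SKILL.md manifest format may differ):

```python
import re

# Matches specs like "llm-wallet-mcp@1.2.3"; a bare name like
# "llm-wallet-mcp" is treated as unpinned.
PINNED = re.compile(r"^[\w.-]+@\d+\.\d+\.\d+$")

def is_pinned(package_spec: str) -> bool:
    """Return True if the spec pins an exact semantic version."""
    return bool(PINNED.match(package_spec))

is_pinned("llm-wallet-mcp")        # unpinned: latest is always installed
is_pinned("llm-wallet-mcp@1.2.3")  # pinned: deterministic install
```

A check like this can run in CI so an unpinned spec fails the build before the skill ships.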
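For the private-key finding, the core fix is to keep the key out of argv entirely. This sketch shows one possible pattern (the variable name `LLM_WALLET_PRIVATE_KEY` and the function are assumptions for illustration, not part of the actual `llm-wallet-mcp` tool):

```python
import getpass
import os

def read_private_key() -> str:
    """Read a private key without ever placing it on the command line.

    Command-line arguments are visible in `ps aux`, shell history, and
    system logs; environment variables and no-echo prompts are not.
    """
    # Prefer an environment variable set by the operator...
    key = os.environ.get("LLM_WALLET_PRIVATE_KEY")
    if key:
        return key
    # ...falling back to an interactive prompt that does not echo input.
    return getpass.getpass("Private key: ")
```

This is the Python analogue of the `read -s` approach mentioned in the finding; a securely permissioned key file would work equally well.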
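For the command-injection finding, the safest pattern is to validate inputs and pass them as a discrete argv list (never through a shell). A minimal sketch for the `pay <url>` case, assuming a hypothetical wrapper around the CLI:

```python
from urllib.parse import urlparse

def build_pay_argv(url: str) -> list[str]:
    """Validate a URL and build an argv list for `llm-wallet pay`.

    Returning a list for use with subprocess.run(..., shell=False) means
    the URL is delivered as a single argument and is never parsed by a
    shell, so metacharacters like `;` or `$( )` stay inert.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"rejected URL: {url!r}")
    return ["llm-wallet", "pay", url]

build_pay_argv("https://api.example.com/invoice")
```

An input such as `; rm -rf /` fails validation outright, and even a well-formed URL containing shell metacharacters is passed as one inert argument rather than interpolated into a command string.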
Embed Code
[SkillShield report for llm_wallet](https://skillshield.io/report/94ddb589ebe289fe)