Trust Assessment
sponge-wallet received a trust score of 65/100, placing it in the Caution category. Users should review the security findings below before deploying this skill.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. The key findings are two excessive-permission issues (direct financial asset manipulation, and Amazon checkout for physical goods exfiltration) and data exfiltration via arbitrary HTTP requests (`x402_fetch`).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, making behavioral safety the main area needing improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Excessive Permissions: Direct Financial Asset Manipulation.** The skill provides extensive capabilities for managing cryptocurrency, including direct transfers, swaps, and withdrawals to arbitrary addresses. If an AI agent using this skill is compromised or manipulated (e.g., via prompt injection), it could be instructed to drain funds from the associated wallet or execute unauthorized trades on platforms like Polymarket, leading to significant financial loss. Remediation: implement robust human-in-the-loop approval for all financial transactions, especially withdrawals and transfers to new or unapproved addresses; use granular API-key permissions limited to necessary operations; warn users clearly about the financial risks of this skill. | LLM | SKILL.md:102 |
| CRITICAL | **Excessive Permissions: Amazon Checkout for Physical Goods Exfiltration.** The skill includes Amazon checkout functionality that lets the agent initiate purchases and specify a shipping address. A compromised or manipulated agent could be instructed to purchase physical goods and ship them to an attacker-controlled address, causing financial loss and exfiltration of physical assets. Remediation: require explicit human approval for all Amazon checkout operations, especially with new or modified shipping addresses; enforce strict shipping-address allowlists or integrate with existing user profiles to prevent arbitrary address changes; warn users about the risk of unauthorized purchases. | LLM | SKILL.md:118 |
| HIGH | **Data Exfiltration via Arbitrary HTTP Requests (x402_fetch).** The `x402_fetch` tool lets the agent make arbitrary HTTP requests to any URL with custom methods, headers, and body content. A compromised agent could abuse this primitive to exfiltrate sensitive data (internal context, other files the agent can access, or even the `SPONGE_API_KEY` itself) to an attacker-controlled server; it is effectively a general-purpose outbound communication channel. Remediation: strictly limit the domains or IP ranges `x402_fetch` can reach; apply content filtering or data loss prevention (DLP) to outbound requests; ensure the LLM's internal context and sensitive environment variables are never included in `x402_fetch` requests unless explicitly and safely designed; consider requiring human approval for requests to new or untrusted URLs. | LLM | SKILL.md:115 |
| MEDIUM | **Credential Handling: API Key Exposure Risk via Shell Commands.** The documentation suggests storing the `SPONGE_API_KEY` in a local file (`~/.spongewallet/credentials.json`) and exporting it as an environment variable using `jq` and `export`. If the agent is manipulated into logging environment variables, or if the `jq` command were influenced by untrusted input, the key could be exposed; the `x402_fetch` tool, if misused, could also be instructed to send the variable to an external service. Remediation: secure `credentials.json` with strict file permissions; redact sensitive environment variables from logs; explicitly instruct the LLM never to log or transmit the `SPONGE_API_KEY` through any channel, especially via `x402_fetch`. | LLM | SKILL.md:64 |
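The two critical findings share a mitigation pattern: gate irreversible actions behind allowlists and explicit human approval. The sketch below illustrates that pattern; all names (`gate_withdrawal`, `gate_checkout`, the allowlist sets, `ApprovalRequired`) are hypothetical and not part of the sponge-wallet API.

```python
# Minimal human-in-the-loop gate for irreversible wallet/checkout actions.
# All names and addresses here are illustrative placeholders, not
# sponge-wallet code.

APPROVED_WITHDRAWAL_ADDRESSES = {"0xAb5801a7D398351b8bE11C439e05C5B3259aec9B"}
APPROVED_SHIPPING_ADDRESSES = {"123 Main St, Springfield"}


class ApprovalRequired(Exception):
    """Raised when an action needs explicit human sign-off."""


def gate_withdrawal(to_address: str, amount: float,
                    human_approved: bool = False) -> dict:
    """Queue a withdrawal only if the destination is allowlisted or approved."""
    if to_address not in APPROVED_WITHDRAWAL_ADDRESSES and not human_approved:
        raise ApprovalRequired(
            f"Withdrawal to new address {to_address} requires human approval"
        )
    return {"action": "withdraw", "to": to_address,
            "amount": amount, "status": "queued"}


def gate_checkout(shipping_address: str, human_approved: bool = False) -> dict:
    """Block checkout to any shipping address not already on file."""
    if shipping_address not in APPROVED_SHIPPING_ADDRESSES and not human_approved:
        raise ApprovalRequired(
            f"Checkout to new shipping address {shipping_address!r} "
            "requires human approval"
        )
    return {"action": "checkout", "ship_to": shipping_address, "status": "queued"}
```

The key design choice is fail-closed behavior: any destination not already on file raises instead of proceeding, so a prompt-injected instruction cannot silently redirect funds or goods.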
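For the `x402_fetch` finding, a host allowlist check can run before any outbound request leaves the agent. A minimal sketch, assuming a hypothetical wrapper and placeholder hosts (neither is actual sponge-wallet configuration):

```python
from urllib.parse import urlparse

# Hypothetical outbound allowlist; these hosts are placeholders.
ALLOWED_HOSTS = {"api.polymarket.com", "x402.example.org"}


def check_outbound_url(url: str) -> str:
    """Raise unless the URL's host is an allowed host or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    if not any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS):
        raise PermissionError(f"x402_fetch blocked: untrusted host {host!r}")
    return url
```

Matching on the parsed hostname (rather than substring-matching the raw URL) avoids bypasses like `https://api.polymarket.com.evil.net/`, which ends with an allowed name but resolves to an attacker host.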
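The medium-severity credential finding can be addressed by refusing loosely-permissioned credential files and redacting the key from anything that leaves the process. A sketch using the documented `~/.spongewallet/credentials.json` path; the `load_api_key`/`redact` helpers and the assumption that the JSON stores the key under `"SPONGE_API_KEY"` are illustrative, not the skill's actual layout.

```python
import json
import stat
from pathlib import Path

CRED_PATH = Path.home() / ".spongewallet" / "credentials.json"


def load_api_key(path: Path = CRED_PATH) -> str:
    """Load the API key, refusing group- or world-readable credential files.

    Assumes the JSON stores the key under "SPONGE_API_KEY"; adjust to the
    actual file layout.
    """
    if path.stat().st_mode & (stat.S_IRGRP | stat.S_IROTH):
        raise PermissionError(f"{path} is too permissive; run: chmod 600 {path}")
    return json.loads(path.read_text())["SPONGE_API_KEY"]


def redact(text: str, secret: str) -> str:
    """Mask the key before text reaches logs or outbound requests."""
    return text.replace(secret, "[REDACTED]") if secret else text
```

Running `redact` over every log line and outbound request body gives a last line of defense even if the model is tricked into echoing its environment.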
[View the full report on SkillShield](https://skillshield.io/report/36426fc6c164854d)
Powered by SkillShield