Trust Assessment
cabin received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via `node` script arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via `node` script arguments | LLM | SKILL.md:75 |

The skill's documentation suggests executing `node` scripts (`src/balance.js`, `src/send.js`) with arguments (`<deposit_address>`, `<amount_usdc>`) derived directly from external API responses. If the API is compromised or returns malicious data, these arguments could be crafted to inject arbitrary shell commands into the `node` invocation. The LLM is responsible for constructing and executing these commands, making it vulnerable if it does not properly sanitize or escape the external input.

Recommendation: sanitize and properly escape all arguments passed to `node` scripts, especially those derived from external API responses or user input. Prefer a safe execution mechanism that does not concatenate untrusted strings directly into shell commands, or validate inputs against expected formats (e.g., valid cryptocurrency addresses, numeric amounts).
Scan History
Embed Code
[](https://skillshield.io/report/f9b96857d6288d46)
Powered by SkillShield