Trust Assessment
moltaiworld received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Arbitrary JavaScript Code Execution by Agents.
The analysis covered a single layer, LLM Behavioral Safety, which scored 70 or above.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary JavaScript Code Execution by Agents | LLM | SKILL.md:100 |

The skill's core functionality allows AI agents to send arbitrary JavaScript code strings to the server for execution. SKILL.md states this explicitly ('All actions are sent as code strings'), and it is demonstrated across multiple agent-side files (advanced-lobster.js, coding-agent.js, demo-agent-chat.js, demo-agent.js, and test scripts). The server (index.js) is designed to receive and execute this code. If the server-side execution environment is not robustly sandboxed (e.g., using the Node.js `vm` module with a strict context and resource limits), an attacker could execute arbitrary code on the server, leading to data exfiltration, denial of service, or full system compromise. The provided index.js is truncated, so the exact sandboxing mechanism is not visible, but the intent to execute arbitrary code is clear and highly risky.

Recommended mitigation: implement a strict sandboxing mechanism for all agent-provided code executed on the server. This should involve:

1. Using the Node.js `vm` module with a carefully constructed context that exposes only whitelisted, safe APIs (e.g., `world` object methods).
2. Disabling access to sensitive Node.js globals and modules such as `process`, `require`, `fs`, `child_process`, `eval`, and `Buffer`.
3. Enforcing strict resource limits (CPU time, memory, execution time) for each script execution to prevent denial-of-service attacks.
4. Thoroughly validating and sanitizing any input that contributes to the dynamically generated code, although sandboxing is the primary defense against arbitrary code execution.
[View the full report](https://skillshield.io/report/89d84ae219032847)
Powered by SkillShield