Trust Assessment
agent-avengers received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 3 critical, 0 high, 0 medium, and 1 low severity. Key findings include arbitrary command execution, a dangerous `os.system()` call, and a missing Node lockfile.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/oozoofrog/agent-avengers/scripts/monitor.py:147 |
| CRITICAL | **Dangerous call: `os.system()`.** A call to `os.system()` was detected in function `watch_mode`; this can execute arbitrary code. Avoid dangerous functions such as `exec`, `eval`, and `os.system`, and use safer alternatives. | Static | skills/oozoofrog/agent-avengers/scripts/monitor.py:147 |
| CRITICAL | **Unsanitized LLM output leads to prompt/command injection in OpenClaw calls.** The `assemble.py` script constructs prompts for `sessions_spawn` and `sessions_send` from `agent['description']`, `agent['inputs']`, and `agent['expected_output']`, values produced by an LLM's decomposition of user input. The `execute.py` script then embeds these untrusted strings directly into backtick-quoted JavaScript string literals for OpenClaw execution. If the LLM-generated content contains backticks or other special characters, it can break out of the string literal and inject arbitrary JavaScript into the OpenClaw runtime: command injection in the OpenClaw environment, enabled by prompt injection against the task-decomposing LLM. Remediation: sanitize and escape all LLM-generated content before embedding it in executable code or in prompts for other LLMs; for JavaScript string literals, escape backticks, dollar signs, and backslashes in the generated `task` and `message` content. Prefer a templating engine or a structured mechanism for passing data to OpenClaw primitives (e.g. a JSON object, if OpenClaw supports one) over string concatenation into executable code. | LLM | scripts/execute.py:102 |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile was found (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/oozoofrog/agent-avengers/package.json |
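The `os.system()` findings above are typically remediated by switching to `subprocess.run()` with an argument list, so that no shell is involved and a user-supplied path cannot smuggle in extra commands. A minimal sketch of that pattern (the function name and the `tail` command are illustrative, not taken from `monitor.py`):

```python
import subprocess


def watch_mode_refresh(log_path: str) -> str:
    """Hypothetical replacement for an os.system() call in watch_mode.

    The command is passed as an argument list, so the OS runs the
    program directly with no shell interpretation: metacharacters in
    log_path (;, `, $(), etc.) are treated as literal characters.
    """
    result = subprocess.run(
        ["tail", "-n", "20", log_path],  # static argv, no shell
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError on non-zero exit
    )
    return result.stdout
```

Unlike `os.system("tail -n 20 " + log_path)`, this form never concatenates the path into a command string, which is the property the Manifest-layer finding asks for.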
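For the OpenClaw injection finding, the escaping the report recommends can be sketched as a small helper that neutralizes the three characters able to break out of a JavaScript template literal. This is an assumption-laden illustration (the function name is hypothetical, and a structured/JSON channel remains preferable to embedding strings in code):

```python
def escape_js_template_literal(text: str) -> str:
    """Escape LLM-generated text before embedding it inside a
    backtick-quoted JavaScript template literal.

    Backslashes must be escaped first so the later replacements do not
    double-escape their own output; then backticks (which terminate the
    literal) and dollar signs (which can start a ${...} interpolation).
    """
    return (
        text.replace("\\", "\\\\")
            .replace("`", "\\`")
            .replace("$", "\\$")
    )
```

Even with escaping in place, passing the decomposed `task` and `message` payloads as data (for example, JSON handed to an OpenClaw primitive) removes the code/data ambiguity entirely, which is why the finding suggests it as the preferred fix.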
Embed Code
[SkillShield report for agent-avengers](https://skillshield.io/report/a6070021b14dc063)
Powered by SkillShield