# Security Audit: sundial-org/awesome-openclaw-skills:skills/agent-orchestrator

Repository: [github.com/sundial-org/awesome-openclaw-skills](https://github.com/sundial-org/awesome-openclaw-skills)

## Trust Assessment
sundial-org/awesome-openclaw-skills:skills/agent-orchestrator received a trust score of 13/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include Prompt Injection via Dynamic Sub-agent SKILL.md Generation, Path Traversal and Prompt Injection in Sub-agent Task Prompt, and Potential Command Injection in External Script Calls.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on March 3, 2026 (commit 6d998e00). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
## Security Findings (4)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via Dynamic Sub-agent SKILL.md Generation.** The skill explicitly states that `SKILL.md` files for sub-agents are generated dynamically using parameters like "Agent's specific role and objective". If these parameters are derived from untrusted user input without proper sanitization, a malicious user could inject arbitrary instructions into the sub-agent's `SKILL.md`. This would allow the attacker to fully control the sub-agent's behavior, potentially leading to data exfiltration, unauthorized actions, or further prompt injection against other tools or models. *Recommendation:* Implement strict input validation and sanitization for all parameters used in dynamic `SKILL.md` generation. Consider using templating engines that escape user input by default, or explicitly filter out prompt injection keywords and control characters. Ensure that the generated `SKILL.md` is reviewed for malicious content before being used by the sub-agent. | LLM | SKILL.md:44 |
| HIGH | **Path Traversal and Prompt Injection in Sub-agent Task Prompt.** The `Task` tool's `prompt` argument is constructed using an f-string that includes `{agent_path}`. If `agent_path` can be influenced by untrusted input (e.g., a user-provided agent name or task identifier), a malicious actor could inject path traversal sequences (e.g., `../../`) to instruct the sub-agent to read or write files outside its designated workspace. Additionally, if `agent_path` contains newline characters or other prompt-modifying strings, it could lead to prompt injection, altering the sub-agent's instructions and potentially overriding its intended behavior. *Recommendation:* Sanitize `agent_path` to prevent path traversal sequences (e.g., `..`, `/`) and prompt injection characters (e.g., newlines, backticks, specific keywords). Use a robust path joining function that normalizes paths and restricts them to a base directory. Consider passing file paths as explicit arguments to the sub-agent tool rather than embedding them directly in the prompt, if the tool supports it. | LLM | SKILL.md:68 |
| HIGH | **Potential Command Injection in External Script Calls.** The skill describes using external Python scripts via shell commands: `python3 scripts/create_agent.py <agent-name> --workspace <path>` and `python3 scripts/dissolve_agents.py --workspace <path>`. If `<agent-name>` or `<path>` are derived from untrusted user input without proper sanitization (e.g., escaping shell metacharacters), a malicious actor could inject arbitrary shell commands. This could lead to remote code execution on the host system. *Recommendation:* When executing external commands with user-controlled arguments, use a safe method like `subprocess.run` with `shell=False` and pass arguments as a list. Ensure all user-provided inputs are strictly validated and sanitized to prevent shell metacharacters from being interpreted as commands. | Static | SKILL.md:35 |
| MEDIUM | **Risk of Malicious Content or Data Exfiltration from Sub-agent Outputs.** Sub-agents are instructed to write all outputs to their `outbox/` directory, and the orchestrator then collects and processes these outputs (e.g., `validate_outputs`, `consolidated_results.extend(outputs)`). If a sub-agent is compromised (e.g., through prompt injection), it could write malicious code, sensitive data, or instructions into its `outbox/`. If the orchestrator subsequently executes, interprets, or displays this content without robust sanitization, it could lead to further compromise, data exfiltration, or unintended actions. The `validate_outputs` function is a critical control point, but its implementation is not detailed. *Recommendation:* Implement strict validation and sanitization for all content read from sub-agent `outbox/` directories. Avoid executing or directly interpreting any content from these directories unless it's explicitly designed for execution and has been thoroughly vetted. If displaying content to users, ensure proper escaping to prevent XSS or other rendering attacks. Restrict the types of files sub-agents can write if possible. | Static | SKILL.md:107 |
[View the full report on SkillShield](https://skillshield.io/report/e2698743a40623c8)