Trust Assessment
multi-agent-patterns received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 7 findings: 2 critical, 4 high, 1 medium, and 0 low severity. Key findings include Arbitrary Shell Command Execution, Arbitrary Python Code Execution (eval), Arbitrary File Read (Data Exfiltration).
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. The llm_behavioral_safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 15, 2026 (commit 3e75fabd). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings7
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary Shell Command Execution The `WorkerAgent.execute_task` method contains a case for `execute_shell_command` which directly executes a command provided in the `task` dictionary using `subprocess.run(command, shell=True)`. If the `task` dictionary can be influenced by untrusted input (e.g., from a user prompt routed through the supervisor), this allows for arbitrary shell command injection, leading to full system compromise. Remove the `execute_shell_command` capability. If shell execution is absolutely necessary, implement a strict allow-list for commands and arguments, avoid `shell=True`, and sanitize all inputs rigorously. Prefer specific, sandboxed tool calls over general shell execution. | Unknown | scripts/coordination.py:269 | |
| CRITICAL | Arbitrary Python Code Execution (eval) The `WorkerAgent.execute_task` method includes a case for `evaluate_python_code` that uses `eval(task["code"])` on content from the `task` dictionary. Executing `eval()` on untrusted input is a direct code execution vulnerability, allowing an attacker to run arbitrary Python code within the agent's environment, potentially leading to system compromise. While `__builtins__` are set to `None`, `eval` is notoriously difficult to sandbox effectively. Remove the `evaluate_python_code` capability. `eval()` should never be used with untrusted input. If dynamic code execution is required, consider safer alternatives like a strictly sandboxed environment or a highly restricted interpreter, but generally avoid this pattern. | Unknown | scripts/coordination.py:275 | |
| HIGH | Arbitrary File Read (Data Exfiltration) The `WorkerAgent.execute_task` method contains a case for `read_file` which opens and reads the content of a file specified by `task["filepath"]`. If `task["filepath"]` can be controlled by untrusted input, this allows an attacker to read arbitrary files on the system, potentially exfiltrating sensitive data, configuration files, or credentials. Implement strict validation and allow-listing for file paths. Restrict file access to a specific, non-sensitive directory. Prevent directory traversal attacks (e.g., `../`). | Unknown | scripts/coordination.py:281 | |
| HIGH | Arbitrary File Write The `WorkerAgent.execute_task` method includes a case for `write_file` that writes content to a file specified by `task["filepath"]`. If `task["filepath"]` and `task["content"]` can be controlled by untrusted input, an attacker can write arbitrary content to arbitrary locations on the file system. This can lead to denial of service (overwriting critical files), privilege escalation (writing to startup scripts), or data corruption. Implement strict validation and allow-listing for file paths. Restrict file writing to a specific, sandboxed directory. Prevent directory traversal attacks. Ensure content is also validated if it can be untrusted. | Unknown | scripts/coordination.py:286 | |
| HIGH | Environment Variable Exfiltration The `WorkerAgent.execute_task` method contains a case for `get_env_var` which retrieves the value of an environment variable specified by `task["var_name"]`. If `task["var_name"]` can be controlled by untrusted input, an attacker can read arbitrary environment variables, potentially exfiltrating sensitive credentials (API keys, database passwords) or other confidential configuration. Implement a strict allow-list for environment variable names that can be accessed. Do not allow arbitrary environment variable lookups based on untrusted input. | Unknown | scripts/coordination.py:292 | |
| HIGH | Unrestricted Network Access (SSRF/API Abuse) The `WorkerAgent.execute_task` method includes a case for `make_api_call` that uses the `requests` library to make arbitrary HTTP requests. The `method`, `url`, `headers`, and `data` are all taken directly from the `task` dictionary. If these parameters can be controlled by untrusted input, this allows for Server-Side Request Forgery (SSRF) attacks, port scanning, or making unauthorized requests to internal or external services, potentially leading to data exfiltration, internal network reconnaissance, or abuse of other APIs. Implement strict validation and allow-listing for URLs, methods, and headers. Prevent access to internal IP ranges and non-HTTP/HTTPS schemes. Consider using a proxy or a dedicated, restricted network client for external requests. | Unknown | scripts/coordination.py:297 | |
| MEDIUM | Agent-to-Host LLM Prompt Injection via `forward_message_to_user` The `WorkerAgent.forward_message_to_user` method sends `message_content` directly to the user. If the 'user' in this context is the host LLM, and `message_content` can be influenced by untrusted input (e.g., a malicious user prompt or a compromised agent), this function could be used by the agent to inject instructions or manipulate the host LLM's behavior. This is a potential prompt injection vector from the agent back to its orchestrating LLM. Implement sanitization or explicit marking of agent-generated content before it is returned to the host LLM. The host LLM should be designed to treat such content as data, not instructions, or to apply its own robust prompt injection defenses. | Unknown | scripts/coordination.py:319 |
Scan History
Embed Code
[](https://skillshield.io/report/813f702d0905449f)
Powered by SkillShield