Trust Assessment
antigravity-swarm received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 25 findings: 9 critical, 12 high, 2 medium, and 2 low severity. Key findings include arbitrary command execution and dangerous calls to `subprocess.run()` and `subprocess.Popen()`.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (25)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell-execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/0xnagato/antigravity-swarm/scripts/compactor.py:61 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell-execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/0xnagato/antigravity-swarm/scripts/dispatch_agent.py:50 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell-execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/0xnagato/antigravity-swarm/scripts/dispatch_agent.py:165 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell-execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/0xnagato/antigravity-swarm/scripts/orchestrator.py:101 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell-execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/0xnagato/antigravity-swarm/scripts/planner.py:166 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell-execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/0xnagato/antigravity-swarm/scripts/ultrawork_loop.py:35 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell-execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/0xnagato/antigravity-swarm/scripts/ultrawork_loop.py:42 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell-execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/0xnagato/antigravity-swarm/scripts/ultrawork_loop.py:50 |
| CRITICAL | **Arbitrary shell command execution from LLM output.** `scripts/dispatch_agent.py` directly executes shell commands parsed from the LLM's output via `subprocess.run(command, shell=True, ...)`. The `command` variable is extracted from the `<<RUN_COMMAND>>...<<END_COMMAND>>` tag without any sanitization, so an attacker can craft a prompt that causes the LLM to emit malicious shell commands, yielding arbitrary code execution on the host with the agent's privileges; `shell=True` exacerbates this by allowing command chaining. Implement strict sanitization or an allow-list for commands and avoid `shell=True`. If `shell=True` is unavoidable, build the command only from trusted, validated components, never directly from untrusted LLM output, and consider a sandboxed environment for command execution. | LLM | scripts/dispatch_agent.py:60 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `compact_file`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/0xnagato/antigravity-swarm/scripts/compactor.py:61 |
| HIGH | **Dangerous call: `subprocess.Popen()`.** Call detected in function `main`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/0xnagato/antigravity-swarm/scripts/dispatch_agent.py:165 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `parse_and_execute_side_effects`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/0xnagato/antigravity-swarm/scripts/dispatch_agent.py:50 |
| HIGH | **Dangerous call: `subprocess.Popen()`.** Call detected in function `_run_real`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/0xnagato/antigravity-swarm/scripts/orchestrator.py:101 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `main`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/0xnagato/antigravity-swarm/scripts/planner.py:166 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `main`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/0xnagato/antigravity-swarm/scripts/ultrawork_loop.py:42 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `main`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/0xnagato/antigravity-swarm/scripts/ultrawork_loop.py:50 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `main`; this can execute arbitrary code. Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/0xnagato/antigravity-swarm/scripts/ultrawork_loop.py:35 |
| HIGH | **Arbitrary file write to host filesystem from LLM output.** `scripts/dispatch_agent.py` lets the LLM specify an arbitrary file path and content via the `<<WRITE_FILE path="...">>...<<END_WRITE>>` tag; `os.makedirs` creates the directories and `open(path, 'w', ...)` writes the content. An attacker can therefore instruct the LLM to write anywhere the agent can reach, potentially overwriting critical system files, injecting malicious scripts, or exfiltrating data by writing it to a publicly accessible location. Restrict writes to a designated sandbox directory, strictly validate paths against traversal (e.g. `../`), and consider a virtualized filesystem or containerization. | LLM | scripts/dispatch_agent.py:45 |
| HIGH | **User-controlled `task` argument enables prompt injection.** `dispatch_agent.py` takes a `task` argument directly from `sys.argv` and embeds it in the `full_prompt` sent to the `gemini` LLM. Because the script parses and executes `<<WRITE_FILE>>` and `<<RUN_COMMAND>>` tags from the LLM's output, a malicious user can craft the `task` argument to include these tags directly, or to prompt the LLM to emit them, leading to arbitrary file writes and command execution (as described in SS-LLM-003 and SS-LLM-005). Sanitize the `task` argument before passing it to the LLM, filter or escape characters that could form the control tags, and ideally validate the LLM's output against an allow-list of expected actions and parameters rather than executing it directly. | LLM | scripts/dispatch_agent.py:110 |
| HIGH | **LLM-generated `subagents.yaml` can embed malicious instructions.** `scripts/planner.py` generates `subagents.yaml` from a user-provided `mission` prompt; the file's per-sub-agent `prompt` fields are later passed to `dispatch_agent.py` by `orchestrator.py`. If an attacker crafts the `mission` prompt to coerce the LLM into embedding `<<WRITE_FILE>>` or `<<RUN_COMMAND>>` tags in a sub-agent's `prompt` field, arbitrary file writes and command execution follow when `orchestrator.py` dispatches that sub-agent. Sanitize the `mission` argument, validate and sanitize the generated `prompt` fields before they reach `dispatch_agent.py`, and harden `dispatch_agent.py` itself as recommended above. | LLM | scripts/planner.py:100 |
| HIGH | **Autonomous loop feeds potentially malicious LLM output back into prompts.** `scripts/ultrawork_loop.py` creates an autonomous feedback loop: on failure it builds a new `mission` prompt from `original_mission`, `findings.md`, and `progress.md`, and the latter two can contain LLM-generated content (e.g. from `dispatch_agent.py` or `compactor.py`). If earlier, attacker-influenced LLM output injected `<<WRITE_FILE>>` or `<<RUN_COMMAND>>` tags, or instructions to generate them, into these markdown files, the injection is fed back into `planner.py`'s prompt, perpetuating and potentially escalating the attack. Strictly sanitize `findings.md` and `progress.md` before using them to construct new prompts; consider a separate, trusted LLM to summarize or filter these files first. | LLM | scripts/ultrawork_loop.py:50 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `rich>=13.0.0` is not pinned to an exact version. Pin Python dependencies with `==<exact version>`. | Dependencies | skills/0xnagato/antigravity-swarm/requirements.txt:1 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `pyyaml>=6.0` is not pinned to an exact version. Pin Python dependencies with `==<exact version>`. | Dependencies | skills/0xnagato/antigravity-swarm/requirements.txt:2 |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile was found (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/0xnagato/antigravity-swarm/package.json |
| LOW | **Dependencies pinned to minimum versions, not exact.** `requirements.txt` specifies minimum versions (e.g. `rich>=13.0.0`, `pyyaml>=6.0`). While better than no pinning, this allows newer, potentially incompatible or vulnerable versions to be installed as they are released. Pin exact versions (e.g. `rich==13.0.0`), generate a lock file (e.g. `pip freeze > requirements.lock`) for reproducible builds, and audit dependencies regularly. | LLM | requirements.txt:1 |
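The dependency findings are closed by pinning exact versions and committing a lock file. The version numbers below are illustrative placeholders, not vetted recommendations:

```
# requirements.txt -- exact pins instead of '>=' ranges
rich==13.7.1
pyyaml==6.0.2
```

After installing, `pip freeze > requirements.lock` captures the full transitive set, and `pip install -r requirements.lock` reproduces it exactly on other machines.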
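The shell-execution findings above share one core remediation: invoke `subprocess` with a static argv list and the default `shell=False`, never a string assembled from user or LLM input. A minimal sketch (the helper name `run_fixed_command` is illustrative, not taken from the skill):

```python
import subprocess
import sys

def run_fixed_command(args: list[str]) -> str:
    """Execute a static argv list with shell=False (subprocess's default).

    Passing a list rather than a string means no shell is involved, so
    command chaining (';', '&&'), globbing, and variable expansion cannot
    occur, and a timeout bounds runaway child processes.
    """
    result = subprocess.run(
        args, capture_output=True, text=True, check=True, timeout=30
    )
    return result.stdout

# Example: run a trusted interpreter with a fixed argument list, never a
# string built from untrusted input.
out = run_fixed_command([sys.executable, "-c", "print('ok')"])
```

`check=True` raises `CalledProcessError` on a nonzero exit, so failures surface instead of being silently ignored.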
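Where commands genuinely originate from LLM output, as in the `dispatch_agent.py` finding, an allow-list gate before execution is one possible hedge. A sketch under stated assumptions: the allow-list contents and the metacharacter set here are illustrative choices, not part of the skill:

```python
import shlex

# Hypothetical allow-list; a real deployment would enumerate only the
# executables the agent is expected to need.
ALLOWED_EXECUTABLES = {"ls", "cat", "pytest"}

def validate_llm_command(command: str) -> list[str]:
    """Parse an LLM-proposed command and reject anything off the allow-list.

    Returns an argv list safe to hand to subprocess.run(argv) with
    shell=False, or raises ValueError.
    """
    # shlex.split tokenizes like a POSIX shell but does NOT execute one.
    argv = shlex.split(command)
    if not argv:
        raise ValueError("empty command")
    if argv[0] not in ALLOWED_EXECUTABLES:
        raise ValueError(f"executable not allowed: {argv[0]!r}")
    # Defense in depth: reject shell metacharacters that would only matter
    # if a later refactor reintroduced shell=True.
    forbidden = set(";&|`$><\n")
    if any(forbidden & set(tok) for tok in argv):
        raise ValueError("shell metacharacter in command")
    return argv
```

An allow-list inverts the failure mode: anything the reviewer did not anticipate is refused by default, rather than executed by default.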
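The arbitrary-file-write finding calls for confining `<<WRITE_FILE>>` targets to a sandbox directory with a containment check that survives `../` traversal. A minimal sketch, assuming Python 3.9+ for `Path.is_relative_to` (the function name `resolve_in_sandbox` is illustrative):

```python
from pathlib import Path

def resolve_in_sandbox(sandbox: Path, requested: str) -> Path:
    """Resolve an LLM-supplied relative path and confirm it stays inside
    the sandbox directory.

    Rejects '../' traversal and absolute paths: joining an absolute path
    replaces the sandbox prefix entirely, so it too fails the check.
    """
    sandbox = sandbox.resolve()
    target = (sandbox / requested).resolve()
    # Path.is_relative_to is the robust containment test; naive string
    # prefix checks are bypassable (e.g. /sandbox vs /sandbox-evil).
    if not target.is_relative_to(sandbox):
        raise PermissionError(f"path escapes sandbox: {requested!r}")
    return target
```

The check must run on the *resolved* path; validating the raw string before resolution leaves traversal sequences exploitable.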
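The prompt-injection findings around `findings.md`, `progress.md`, and the generated `subagents.yaml` all reduce to one loop-breaking step: strip the executable control tags from any LLM-generated text before it re-enters a prompt. A sketch; the regex below is an assumption modeled on the `<<WRITE_FILE>>`/`<<RUN_COMMAND>>` markers quoted in the report, and would need to match the skill's actual tag grammar:

```python
import re

# Hypothetical pattern for the control tags named in the findings.
CONTROL_TAG_RE = re.compile(
    r"<<(?:WRITE_FILE[^>]*|END_WRITE|RUN_COMMAND|END_COMMAND)>>"
)

def sanitize_feedback(markdown: str) -> str:
    """Neutralize executable control tags in LLM-generated notes before
    they are fed back into a planner prompt, breaking the injection loop."""
    return CONTROL_TAG_RE.sub("[tag removed]", markdown)
```

Filtering is a mitigation, not a guarantee: an LLM can be coaxed into re-emitting tags from paraphrased instructions, which is why the report also recommends allow-listing actions on the output side.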