Security Audit
autonomous-agent-patterns
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
autonomous-agent-patterns received a trust score of 0/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 9 findings: 2 critical, 5 high, 2 medium, and 0 low severity. Key findings include Command Injection via `subprocess.run(shell=True)`, Command Injection via `subprocess.getoutput` with untrusted `workspace`, and Path Traversal in `ReadFileTool`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 0/100, reflecting the concentration of injection and traversal findings in the skill's code.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (9)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via `subprocess.run(shell=True)`.** The `execute_sandboxed` method calls `subprocess.run` with `shell=True`. Although `validate_command` attempts to whitelist commands, `shell=True` lets an attacker chain commands with semicolons or other shell metacharacters once the base command passes the whitelist: if `ls` is allowed, `ls; rm -rf /` executes arbitrary commands, because `validate_command` checks only the base command, not its arguments. *Remediation:* avoid `shell=True`; pass commands as a list of arguments (e.g., `['ls', '-l']`) and validate each argument. If `shell=True` is unavoidable, implement far more robust sanitization that accounts for shell metacharacters and command chaining. | Static | SKILL.md:240 |
| CRITICAL | **Command Injection via `subprocess.getoutput` with untrusted `workspace`.** The `_capture_workspace` method builds shell commands with an f-string containing `workspace` and executes them via `subprocess.getoutput`. If an attacker can control `workspace` (e.g., through a malicious checkpoint or agent state), they can inject arbitrary shell commands; `workspace = '/tmp; rm -rf /'` would delete the root directory. *Remediation:* never interpolate untrusted strings into `subprocess.getoutput` or `subprocess.run` shell commands; use `subprocess.run` with `shell=False` and a list of arguments, or strictly validate `workspace` as a legitimate, non-malicious path. | Static | SKILL.md:460 |
| HIGH | **Path Traversal in `ReadFileTool`.** The tool passes the `path` argument straight to `open(path, 'r')`. If `path` comes from an untrusted source (e.g., an LLM's tool-call arguments derived from user input), an attacker can supply paths such as `../../../../etc/passwd` to read sensitive files outside the intended working directory. A `SandboxedExecution.validate_path` appears later in the skill but is never applied to this tool, leaving it directly vulnerable. *Remediation:* validate `path` against an allowed, sandboxed directory before opening any file, either by enforcing `SandboxedExecution.validate_path` on all file operations or by using a filesystem abstraction that enforces boundaries. | Static | SKILL.md:90 |
| HIGH | **Path Traversal in `EditFileTool`.** Like `ReadFileTool`, this tool uses `path` directly in `open(path, 'r')` and `open(path, 'w')`. An attacker controlling `path` can read or overwrite arbitrary files, potentially causing data corruption, privilege escalation, or denial of service; the provided snippets show no path validation for this tool. *Remediation:* enforce strict path validation on all write/edit operations, confining `path` to a designated, sandboxed workspace. | Static | SKILL.md:170 |
| HIGH | **Path Traversal in `ContextManager.add_file` and `add_folder`.** Both methods read file contents from the `path` argument without validation. An attacker controlling `path` can read arbitrary files and exfiltrate their contents by having the agent include them in the LLM prompt. *Remediation:* rigorously validate any `path` passed to `add_file` or `add_folder`, confirming it lies within an allowed, sandboxed directory before reading. | Static | SKILL.md:390 |
| HIGH | **Server-Side Request Forgery (SSRF) in `BrowserTool.open_url`.** The method navigates the browser to an arbitrary `url` argument. An attacker controlling `url` could direct the agent's browser at internal network resources, sensitive local files (e.g., `file:///etc/passwd`), or other websites, leading to information disclosure or unauthorized actions; the captured screenshot and page content could then be exfiltrated. *Remediation:* whitelist allowed domains and protocols, block internal IP ranges and the `file://` scheme, and consider running the browser in a strongly isolated environment or on a dedicated machine when it must visit untrusted URLs. | Static | SKILL.md:300 |
| HIGH | **Server-Side Request Forgery (SSRF) in `ContextManager.add_url`.** The method fetches arbitrary URLs with `requests.get(url)`. An attacker controlling `url` can force requests to internal network resources, cloud metadata endpoints, or other sensitive services, and the fetched content is added to the agent's context, enabling data exfiltration. *Remediation:* validate URLs by whitelisting allowed domains, blocking internal IP ranges, and disallowing sensitive protocols; consider a dedicated, isolated service for fetching external URLs. | Static | SKILL.md:410 |
| MEDIUM | **Environment Variable Leakage via `os.environ` in Sandboxed Execution.** `execute_sandboxed` passes `os.environ` unchanged as the `env` parameter of `subprocess.run`. If the command-injection flaw (SS-CMD-001) is exploited, the injected command can read and exfiltrate sensitive environment variables such as API keys and database credentials. *Remediation:* define a minimal, explicit set of environment variables for the sandboxed command, filtering out or blanking anything sensitive to limit the exfiltration surface. | Static | SKILL.md:247 |
| MEDIUM | **Prompt Injection via Untrusted Context Data.** `ContextManager.format_for_prompt` aggregates content from files, URLs, and diagnostics and inserts it directly into the LLM prompt. Malicious content (e.g., from an attacker-controlled path or URL) can carry instructions that manipulate the LLM into ignoring prior instructions, generating harmful output, or misusing tools. *Remediation:* sanitize or escape all untrusted content before it reaches the prompt; use structured-input API features that separate instructions from data, or delimit untrusted content with XML/JSON tags and instruct the LLM not to interpret it as commands. | LLM | SKILL.md:429 |
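The two command-injection findings share one remediation: drop `shell=True` and pass an argument vector, so shell metacharacters are inert. A minimal sketch of that pattern; the function name, whitelist contents, and timeout are illustrative, not taken from the skill:

```python
import shlex
import subprocess

# Illustrative whitelist; a real skill would tailor this to its needs.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

def run_sandboxed(command: str, timeout: int = 30) -> str:
    """Run a whitelisted command without shell=True so metacharacters are inert."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {argv[:1]}")
    # shell=False (the default) passes argv straight to the OS, so an input like
    # "ls; rm -rf /" yields argv[0] == "ls;", which fails the whitelist instead
    # of chaining a second command.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    return result.stdout
```

With this shape, validating `argv[0]` is sufficient to stop chaining, though argument-level validation is still wise for commands that can themselves read or write files.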
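The three path-traversal findings all call for a containment check before any file I/O. One possible shape for such a check (hypothetical names, not the skill's `SandboxedExecution.validate_path`; requires Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

class PathOutsideSandboxError(Exception):
    """Raised when a requested path escapes the sandbox root."""

def validate_path(path: str, root: str) -> Path:
    """Resolve path relative to root and ensure it stays inside root."""
    root_resolved = Path(root).resolve()
    # resolve() collapses ".." components and follows symlinks before the
    # containment check, so "../../etc/passwd" and symlink escapes both fail.
    candidate = (root_resolved / path).resolve()
    if not candidate.is_relative_to(root_resolved):
        raise PathOutsideSandboxError(f"{path!r} escapes {root!r}")
    return candidate
```

Routing every `open()` in `ReadFileTool`, `EditFileTool`, and `ContextManager` through a check like this closes all three traversal findings at once.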
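For the two SSRF findings, a URL guard would restrict schemes and reject hosts that resolve to private, loopback, or link-local addresses. A sketch under those assumptions (note that a check like this races against the actual request, so DNS rebinding remains a residual risk unless the resolved address is pinned for the fetch):

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def check_url(url: str) -> str:
    """Reject non-HTTP schemes and hosts resolving to internal addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"scheme not allowed: {parsed.scheme!r}")
    if not parsed.hostname:
        raise ValueError("URL has no host")
    for info in socket.getaddrinfo(parsed.hostname, None):
        # Strip any IPv6 zone id (e.g. "%eth0") before parsing the address.
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError(f"{parsed.hostname} resolves to blocked {addr}")
    return url
```

Calling `check_url` before `requests.get` in `ContextManager.add_url`, and before navigation in `BrowserTool.open_url`, blocks `file://` URLs and cloud-metadata style targets such as `http://169.254.169.254/`.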
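The environment-leakage finding suggests an explicit allowlist instead of forwarding `os.environ` wholesale. A minimal sketch; the variable list is an assumption to be tuned per command:

```python
import os

# Pass through only what the sandboxed command actually needs (assumed list).
SAFE_ENV_VARS = ("PATH", "LANG", "HOME")

def minimal_env() -> dict:
    """Build a clean environment dict rather than forwarding os.environ."""
    return {k: os.environ[k] for k in SAFE_ENV_VARS if k in os.environ}

# Usage: subprocess.run(argv, env=minimal_env(), ...) so an injected command
# cannot read API keys or credentials from the agent's own environment.
```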
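For the prompt-injection finding, one common mitigation is to delimit untrusted context so the system prompt can instruct the model to treat it as data. A sketch of that tagging (hypothetical helper, not the skill's `format_for_prompt`; delimiting reduces but does not eliminate injection risk):

```python
def wrap_untrusted(content: str, source: str) -> str:
    """Tag untrusted context so the model is told to treat it as data only."""
    # Escape any embedded closing tag so the content cannot break out of
    # the wrapper and masquerade as trusted instructions.
    escaped = content.replace("</untrusted>", "&lt;/untrusted&gt;")
    return (
        f'<untrusted source="{source}">\n'
        f"{escaped}\n"
        "</untrusted>"
    )
```

The surrounding system prompt would then state that anything inside `<untrusted>` tags must never be interpreted as instructions, tool calls, or policy changes.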
[View the full report](https://skillshield.io/report/eb1244537ccda5bf)
Powered by SkillShield