Security Audit
computer-use-agents
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
computer-use-agents received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings are Direct Shell Command Execution via the 'bash' tool (critical), Direct Desktop Control via pyautogui (high), and Screen Capture Capability for Potential Data Exfiltration (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct Shell Command Execution via 'bash' Tool** — The `AnthropicComputerUse` class explicitly defines and intends to implement a `BetaToolBash20241022` tool, which grants direct access to the underlying operating system's shell. If the inputs to this `bash` tool are derived from untrusted LLM outputs or user input without rigorous validation and sandboxing, it presents a critical command injection vulnerability, allowing an attacker to execute arbitrary commands on the host system. Crucially, always run agents with `bash` access in a highly restricted, isolated, and ephemeral sandboxed environment. Implement strict input validation and sanitization for any commands passed to the `bash` tool. Consider using an allowlist of safe commands and arguments rather than a blocklist. Limit the capabilities of the user running the bash commands within the sandbox. | LLM | SKILL.md:120 |
| HIGH | **Direct Desktop Control via pyautogui** — The `ComputerUseAgent` class uses `pyautogui` to perform mouse clicks, keyboard typing, and scrolling based on dictionary inputs. If these inputs are derived from untrusted LLM outputs or user input without proper validation and sandboxing, an attacker could manipulate the agent to perform arbitrary actions on the host system, such as opening applications, typing sensitive data, or navigating to malicious websites. While the skill later recommends sandboxing, the code snippet itself does not enforce it. Implement robust sandboxing for the agent's execution environment, as described in the 'Sandboxed Environment Pattern' section of this skill. Ensure all inputs to `execute_action` are strictly validated and sanitized if they originate from untrusted sources (e.g., LLM responses). | LLM | SKILL.md:50 |
| HIGH | **Screen Capture Capability for Potential Data Exfiltration** — The `capture_screenshot` method in `ComputerUseAgent` (using `pyautogui.screenshot()`) and the `scrot` command used in `_handle_computer_action` within `AnthropicComputerUse` capture the entire screen content. If these captured images (base64 encoded) are subsequently sent to an untrusted LLM or an external service, sensitive information visible on the user's desktop could be exfiltrated. Ensure that screen captures are only performed within a strictly sandboxed environment where no sensitive data is present. Implement strict data governance policies for any captured images, ensuring they are not transmitted to untrusted parties or stored insecurely. Consider redacting sensitive areas of the screen before processing. | LLM | SKILL.md:39 |
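The allowlisting and input-validation mitigations recommended above can be sketched as follows. This is a minimal illustration, not code from the audited skill: the `ALLOWED_COMMANDS` set, `validate_bash_command`, and `validate_action` names are hypothetical, and the coordinate bounds are placeholder values.

```python
import shlex

# Hypothetical allowlist: only these executables may be invoked by the bash tool.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "echo"}

def validate_bash_command(command: str) -> list[str]:
    """Reject commands whose executable is not allowlisted, or that contain
    shell metacharacters enabling chaining, substitution, or redirection."""
    if any(ch in command for ch in ";|&><`$"):
        raise ValueError("shell metacharacters are not allowed")
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowlisted: {argv[0] if argv else '(empty)'}")
    # Pass argv to subprocess.run(argv) directly -- never use shell=True.
    return argv

# Hypothetical schema check for pyautogui-style action dicts before execute_action.
ALLOWED_ACTIONS = {"click", "type", "scroll"}

def validate_action(action: dict) -> dict:
    """Validate an LLM-produced action dict before it reaches pyautogui."""
    if action.get("type") not in ALLOWED_ACTIONS:
        raise ValueError("unknown action type")
    if action["type"] == "click":
        x, y = action.get("x"), action.get("y")
        # Placeholder screen bounds; derive from the sandbox's actual resolution.
        if not (isinstance(x, int) and isinstance(y, int)
                and 0 <= x < 3840 and 0 <= y < 2160):
            raise ValueError("click coordinates out of range")
    return action
```

Even with validation in place, these checks complement rather than replace the sandboxing the findings call for: an allowlist constrains what a compromised agent can request, while the sandbox bounds what any request can damage.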
Full report: https://skillshield.io/report/5edf2cfd38c831cc