Trust Assessment
browser-use received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via cdpUrl in gateway config.patch, Potential Prompt Injection in 'Tasks' API 'task' parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Potential Command Injection via cdpUrl in gateway config.patch The skill instructs the host LLM to execute a `gateway config.patch` command where the `cdpUrl` obtained from an external API response is directly interpolated into a shell command. If the `browser-use.com` API is compromised or returns a maliciously crafted `cdpUrl` (e.g., containing shell metacharacters, malformed JSON, or escape sequences), it could lead to arbitrary command execution on the host system or unauthorized manipulation of the `gateway` configuration. The `cdpUrl` must be strictly validated and properly escaped before being inserted into the `gateway config.patch` command. Ideally, use a programmatic API for configuration updates that handles escaping, rather than direct shell command interpolation. If shell execution is unavoidable, ensure robust sanitization or use a safer method like passing the `cdpUrl` as an environment variable or file content if the `gateway` tool supports it. | LLM | SKILL.md:51 | |
| HIGH | Potential Prompt Injection in 'Tasks' API 'task' parameter The skill describes an API endpoint for 'Tasks' where a `task` string is sent to a `browser-use-llm` subagent. If the skill is designed to accept user-provided input for this `task` parameter without proper sanitization or guardrails, a malicious user could craft a prompt injection attack. This could manipulate the `browser-use-llm` to perform unintended actions, disclose information, or bypass security controls within the browser automation context. Implement robust input validation and sanitization for any user-provided input that populates the `task` parameter. Consider using a separate, hardened LLM for prompt moderation or employing techniques like prompt templating, input filtering, and output validation to mitigate injection risks. Clearly define the boundaries of what the `browser-use-llm` can access and do. | LLM | SKILL.md:109 |
Scan History
Embed Code
[](https://skillshield.io/report/ad14b1edbc7372ea)
Powered by SkillShield