Security Audit
vercel-labs/agent-browser:skills/dogfood
github.com/vercel-labs/agent-browser

Trust Assessment
vercel-labs/agent-browser:skills/dogfood received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Command Injection via Unsanitized User Input in Bash Commands, Sensitive Data Exposure via Arbitrary Output Directory, and Excessive Permissions for Bash Tool.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on March 6, 2026 (commit aba23531). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via Unsanitized User Input in Bash Commands.** The skill constructs bash commands by directly interpolating user-provided values for `TARGET_URL`, `SESSION`, and `OUTPUT_DIR` without proper sanitization or escaping. This allows an attacker to inject arbitrary shell commands by crafting malicious input for these parameters. For example, providing `example.com; rm -rf /` as the `TARGET_URL` or `/tmp/foo; rm -rf /` as the `OUTPUT_DIR` could lead to arbitrary code execution or data deletion. Implement robust sanitization and shell escaping for all user-provided inputs (`TARGET_URL`, `SESSION`, `OUTPUT_DIR`, `EMAIL`, `PASSWORD`) before interpolating them into bash commands. Consider using a dedicated library for shell argument escaping, or passing arguments as separate parameters to the `agent-browser` tool if it supports it, rather than direct string interpolation. Additionally, restrict the `Bash` tool permissions to allow only specific commands and arguments, rather than `Bash(agent-browser:*)`. | LLM | SKILL.md:58 |
| HIGH | **Sensitive Data Exposure via Arbitrary Output Directory.** The skill saves potentially sensitive authentication state to `auth-state.json` within a user-specified `{OUTPUT_DIR}`. If an attacker provides a path to a publicly accessible directory (e.g., a web server's document root or a shared network drive), this could expose authentication tokens, cookies, or other session data, compromising user accounts or system integrity. Restrict the `{OUTPUT_DIR}` parameter to a safe, sandboxed location (e.g., a temporary directory within the agent's workspace) that is not directly controllable by the user. If user-specified output locations are necessary, validate them against a whitelist of allowed paths or constrain them to a secure, non-public directory. Avoid saving sensitive authentication state to user-controlled locations. | LLM | SKILL.md:80 |
| MEDIUM | **Excessive Permissions for Bash Tool.** The declared permission `Bash(agent-browser:*)` grants the skill the ability to execute the `agent-browser` command with any arbitrary arguments. While intended for legitimate use, this broad permission significantly increases the attack surface and enables the command injection vulnerabilities identified. It allows the skill to execute commands that might not be strictly necessary for its intended function, especially when combined with unsanitized user input. Refine the `Bash` tool permissions to be as granular as possible. Instead of `Bash(agent-browser:*)`, specify exact commands and argument patterns that are allowed (e.g., `Bash(agent-browser open <url>)`, `Bash(agent-browser --session <session_name> snapshot)`). This least-privilege approach would mitigate the impact of potential command injection attempts by restricting what an injected command can do. | LLM | Manifest (frontmatter JSON):1 |
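For the medium-severity finding, the least-privilege change is a manifest edit rather than code. A hedged sketch of what narrower frontmatter could look like, using Claude Code-style `Bash(...)` permission matchers as the finding itself does; the exact matcher syntax depends on the host runtime, and the subcommand names are illustrative:

```yaml
---
# Instead of the wildcard grant:
#   allowed-tools: Bash(agent-browser:*)
# enumerate only the subcommands the skill actually needs:
allowed-tools: Bash(agent-browser open:*), Bash(agent-browser snapshot:*)
---
```

An injected payload riding on an allowed argument would then still be limited to invocations matching these patterns, shrinking the blast radius even if input sanitization fails.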
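The escaping fix recommended for the critical finding can be sketched as follows. This is a minimal illustration using Python's standard-library `shlex.quote`; the `agent-browser` subcommand and flag names shown are taken from the finding text and are illustrative, not the skill's actual CLI:

```python
import shlex


def build_open_command(target_url: str, session: str, output_dir: str) -> str:
    """Quote each user-supplied value before interpolating it, so shell
    metacharacters (;, &&, $(), backticks) are treated as literal text
    rather than command separators."""
    return (
        f"agent-browser --session {shlex.quote(session)} "
        f"open {shlex.quote(target_url)} "
        f"--output-dir {shlex.quote(output_dir)}"
    )


# The injection payload from the finding stays a single literal argument:
cmd = build_open_command("example.com; rm -rf /", "default", "/tmp/out")
# cmd contains 'example.com; rm -rf /' as one quoted token
```

Stronger still is to avoid the shell entirely, e.g. `subprocess.run(["agent-browser", "open", target_url], ...)` with a list of arguments, so no quoting layer exists to get wrong.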
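The high-severity finding's sandboxing recommendation amounts to canonicalizing the requested output path and rejecting anything that escapes an allowed root. A minimal sketch, assuming a hypothetical sandbox root (the `/tmp/agent-workspace` path is an assumption, not part of the skill):

```python
from pathlib import Path

# Hypothetical sandbox root for illustration; a real agent would use
# its own workspace directory here.
ALLOWED_ROOT = Path("/tmp/agent-workspace")


def resolve_output_dir(requested: str) -> Path:
    """Resolve the requested path (collapsing '..' segments and symlinks)
    and refuse anything that lands outside the sandbox root."""
    candidate = (ALLOWED_ROOT / requested).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT.resolve()):
        raise ValueError(f"output dir escapes sandbox: {requested!r}")
    return candidate
```

Resolving *before* the containment check matters: a naive string-prefix test passes `"/tmp/agent-workspace/../../etc"`, while the resolved path does not.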
[Full report on SkillShield](https://skillshield.io/report/42aaea04dfe01343)
Powered by SkillShield