Trust Assessment
The `browser` skill received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 3 high, 0 medium, and 0 low severity. Key findings include "Skill grants full Bash shell access," "Arbitrary command execution possible via Bash permission," and "Sensitive data exfiltration possible via Bash permission."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill grants full Bash shell access.** The skill's manifest declares `Bash` as an allowed tool, granting the AI agent the ability to execute arbitrary shell commands on the host system. This extremely broad permission can lead to severe security vulnerabilities, including command injection, data exfiltration, and credential harvesting, and is rarely justified for a skill whose primary function is browser automation. *Remediation:* Restrict tool access to the absolute minimum necessary. If shell execution is required, use a more constrained execution environment or a dedicated, sandboxed tool that exposes only specific, safe operations; avoid granting raw `Bash` access. | LLM | SKILL.md |
| HIGH | **Arbitrary command execution possible via Bash permission.** With the `Bash` permission, the AI agent can construct and execute any shell command, allowing an attacker to bypass the intended `browser` CLI tool and run malicious commands directly on the host system. This could lead to system compromise, data manipulation, or denial of service. *Remediation:* Remove or severely restrict `Bash` permissions. If specific shell commands are needed, wrap them in a dedicated, sandboxed tool with strict input validation and whitelisting of commands and arguments. | LLM | SKILL.md |
| HIGH | **Sensitive data exfiltration possible via Bash permission.** The `Bash` permission lets the AI agent read, modify, and transmit any data accessible to the user running the agent, including sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) and environment variables. `SKILL.md` explicitly mentions `BROWSERBASE_API_KEY` and `BROWSERBASE_PROJECT_ID` being stored in a `.env` file, which an agent with `Bash` access could easily read and exfiltrate. *Remediation:* Remove or severely restrict `Bash` permissions, implement strict data access controls, and monitor outbound network connections. Avoid storing sensitive credentials in easily accessible `.env` files if the agent has shell access. | LLM | SKILL.md:18 |
| HIGH | **API keys and other credentials vulnerable to harvesting via Bash.** With `Bash` permissions, the AI agent can access and potentially exfiltrate credentials stored on the system. `SKILL.md` explicitly mentions `BROWSERBASE_API_KEY` and `BROWSERBASE_PROJECT_ID` being stored in a `.env` file; an agent with `Bash` access could read these values and transmit them to an unauthorized third party. *Remediation:* Remove or severely restrict `Bash` permissions, store credentials in secure vaults or in locations not readable by arbitrary shell commands, and adopt robust secrets management practices. | LLM | SKILL.md:18 |
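The remediation repeated across these findings, replacing raw `Bash` access with a sandboxed wrapper that whitelists commands and arguments, can be sketched as follows. This is a minimal illustration, not part of any SkillShield or skill API; the `ALLOWED_COMMANDS` table, the `browser` subcommands it lists, and the `run_tool` helper are all hypothetical names chosen for this example:

```python
import shlex
import subprocess

# Hypothetical whitelist: only the binary the skill actually needs,
# mapped to the subcommands it is allowed to invoke.
ALLOWED_COMMANDS = {
    "browser": {"open", "click", "screenshot", "close"},
}


def run_tool(command_line: str) -> subprocess.CompletedProcess:
    """Run a command only if both the binary and its first argument
    are whitelisted; reject everything else before any process starts."""
    argv = shlex.split(command_line)
    if not argv:
        raise ValueError("empty command")
    binary, *rest = argv
    allowed_subcommands = ALLOWED_COMMANDS.get(binary)
    if allowed_subcommands is None:
        raise PermissionError(f"binary not whitelisted: {binary}")
    if not rest or rest[0] not in allowed_subcommands:
        raise PermissionError(f"subcommand not whitelisted: {rest[:1]}")
    # shell=False means argv is passed directly to the program, so the
    # agent cannot inject pipes, redirects, or command substitution.
    return subprocess.run(argv, shell=False, capture_output=True, text=True)
```

Under this scheme a request like `cat ~/.ssh/id_rsa` or `env` is rejected before execution, while `browser screenshot` passes the whitelist check, which is the constrained behavior the findings recommend in place of a blanket `Bash` grant.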
[Full report on SkillShield](https://skillshield.io/report/d5608a63ddc3e427)