Security Audit
shanraisshan/claude-code-best-practice:.claude/skills/agent-browser
github.com/shanraisshan/claude-code-best-practice

Trust Assessment
shanraisshan/claude-code-best-practice:.claude/skills/agent-browser received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include "Sensitive environment variable access: $USER", "Browser state saving can expose credentials and session data", and "Local file access enabled for browser context".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 24, 2026 (commit a4f7f2ec). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Browser state saving can expose credentials and session data.** The `agent-browser state save <file>` command saves the entire browser session state, including cookies, local storage, and potentially authentication tokens or other sensitive data. If an AI agent is instructed to log into a site and then save its state, an attacker who retrieves the saved state file (e.g., `auth.json`) could hijack the authenticated session, enabling credential harvesting and data exfiltration. Mitigation: enforce strict policies on when and where browser state may be saved; avoid saving full browser state where possible, or encrypt saved state files and store them in a secure, access-controlled location; and instruct the agent never to save or transmit sensitive state files without explicit, secure user confirmation. | LLM | SKILL.md:100 |
| MEDIUM | **Sensitive environment variable access: `$USER`.** Access to the sensitive environment variable `$USER` was detected in a shell context. Verify that this access is necessary and that the value is not exfiltrated. | Static | .claude/skills/agent-browser/SKILL.md:84 |
| MEDIUM | **Local file access enabled for browser context.** The `agent-browser --allow-file-access open file:///path/to/document.pdf` command explicitly lets the browser access local files. While necessary for specific tasks such as opening local PDFs or HTML, this significantly increases the attack surface: if an attacker can trick the agent into opening a malicious local HTML file (e.g., by first writing a crafted file to disk or supplying a malicious `file://` URL), it could lead to local file system access, execution of local scripts within the browser's context, or other browser-based vulnerabilities that compromise the host system. Mitigation: restrict `--allow-file-access` to trusted, predefined file paths or disable it by default; validate and sanitize any file paths passed to this command; and consider running the browser in a more isolated sandbox when local file access is enabled. | LLM | SKILL.md:135 |
| INFO | **Broad Bash execution permission with potential for command injection.** The skill declares the `Bash(agent-browser:*)` permission, allowing the AI agent to execute any command starting with `agent-browser` via Bash. Although the `agent-browser` tool is expected to sanitize its arguments, if user-controlled input (e.g., URLs, text to fill, selectors) is concatenated into a shell command without proper escaping, command injection becomes possible. This is a general risk of broad shell-execution permissions combined with user-controlled input, even though no specific exploit is demonstrated in the provided examples. Mitigation: ensure that `agent-browser` rigorously sanitizes all user-provided arguments before constructing internal shell commands or driving the underlying browser engine, and instruct the agent to sanitize user input before passing it to `agent-browser` commands, especially untrusted input. | LLM | Manifest |
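The HIGH finding's mitigation (audit saved state before it leaves the sandbox) can be sketched in a few lines. This assumes the state file uses the Playwright-style JSON layout (`cookies` / `origins` keys) that browser automation tools commonly emit; the actual format of `agent-browser` state files, and the `audit_state_file` helper itself, are hypothetical.

```python
import json

# Substrings that commonly mark session-bearing cookies (heuristic, not exhaustive).
SENSITIVE_COOKIE_HINTS = ("session", "token", "auth", "sid")

def audit_state_file(path: str) -> list[str]:
    """Return warnings for sensitive data found in a saved browser-state JSON file."""
    with open(path) as f:
        state = json.load(f)
    warnings = []
    for cookie in state.get("cookies", []):
        name = cookie.get("name", "").lower()
        if any(hint in name for hint in SENSITIVE_COOKIE_HINTS):
            warnings.append(f"sensitive cookie: {cookie.get('name')} ({cookie.get('domain')})")
    for origin in state.get("origins", []):
        if origin.get("localStorage"):
            warnings.append(f"localStorage entries for {origin.get('origin')}")
    return warnings
```

A wrapper around `agent-browser state save` could refuse to write the file outside an access-controlled directory whenever this audit returns warnings.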
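The MEDIUM `--allow-file-access` finding recommends restricting local file access to trusted, predefined paths. A minimal sketch of that allowlist check, resolving the URL's path so `../` traversal and symlinks cannot escape the trusted directory; the `/srv/agent-docs` directory and the `validate_file_url` helper are illustrative assumptions, not part of the skill:

```python
from pathlib import Path
from urllib.parse import unquote, urlparse

# Hypothetical directory of documents the agent is allowed to open.
ALLOWED_DIRS = [Path("/srv/agent-docs").resolve()]

def validate_file_url(url: str) -> Path:
    """Resolve a file:// URL and ensure it stays inside an allowlisted directory."""
    parsed = urlparse(url)
    if parsed.scheme != "file":
        raise ValueError(f"not a file:// URL: {url}")
    # resolve() collapses ../ segments and follows symlinks before the check.
    path = Path(unquote(parsed.path)).resolve()
    if not any(path.is_relative_to(d) for d in ALLOWED_DIRS):  # Python 3.9+
        raise ValueError(f"path outside allowlist: {path}")
    return path
```

Only a URL that passes this check would then be handed to `agent-browser --allow-file-access open`.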
Full report: https://skillshield.io/report/b2c7ae93f98c9d92