Trust Assessment
next-browser received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include a potential prompt injection via `task_description` into a downstream AI, and a potential command injection if the `curl` examples are executed directly with unsanitized input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Prompt Injection in `task_description` for Downstream AI.** The skill exposes a `task_description` field that is passed directly as a prompt to the Nextbrowser autonomous browser subagent. A malicious user could craft this `task_description` to manipulate the subagent into performing unintended actions, such as data exfiltration from browser sessions, unauthorized actions on logged-in accounts (e.g., social media, banking), or misuse of the browser environment. Although the injection targets a downstream AI (the Nextbrowser subagent), the host LLM is instructed to facilitate it by forwarding the untrusted prompt, making this a significant security concern for the overall system. **Recommendation:** Validate and sanitize the `task_description` field before passing it to the Nextbrowser API; consider guardrails, content-moderation filters, or a human-in-the-loop approval process for sensitive tasks; and clearly inform users of the risks of supplying untrusted or malicious prompts to the subagent. | LLM | SKILL.md:90 |
| MEDIUM | **Potential Command Injection if `curl` examples are executed directly with unsanitized input.** The `SKILL.md` provides `curl` command examples with placeholders such as `<profile-name>`, `<profile-id>`, and `<task-id>`. If the skill's underlying implementation executed these `curl` commands via a shell (e.g., `subprocess.run(..., shell=True)` in Python) and substituted the placeholders with untrusted user input without proper shell escaping, an attacker could inject shell metacharacters into a user-controlled parameter and execute arbitrary commands on the host system. **Recommendation:** Strictly validate and shell-escape any user-derived parameters if `subprocess` with `shell=True` is used; the preferred approach is to construct and send API requests with an HTTP client library (e.g., `requests` in Python), which handles escaping within the HTTP protocol and avoids shell injection entirely. | LLM | SKILL.md:30 |
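The input-validation guardrail recommended for the HIGH finding can be sketched as follows. This is a minimal, illustrative screen, not the skill's actual implementation: the function name, pattern list, and length cap are all assumptions, and a deny-list alone is not a complete defense against prompt injection.

```python
import re

# Hypothetical deny-list of phrases that commonly appear in injection
# attempts against a browser subagent. Illustrative only, not exhaustive.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"exfiltrate",
    r"send .* to http",
    r"password|credential",
]
MAX_LENGTH = 2000  # arbitrary cap to limit prompt-stuffing attacks


def screen_task_description(task_description: str) -> str:
    """Return the task description if it passes basic checks, else raise ValueError."""
    if len(task_description) > MAX_LENGTH:
        raise ValueError("task_description too long")
    lowered = task_description.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError(f"task_description matched blocked pattern: {pattern}")
    return task_description
```

In practice such a filter would sit in front of the Nextbrowser API call, ideally combined with the report's other suggestions (content-moderation models, human-in-the-loop approval for sensitive tasks).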
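The MEDIUM finding's safe/unsafe contrast can be sketched as below. The endpoint and parameter names are placeholders, not the real Nextbrowser API; the stdlib `urlencode` stands in for the `params=` handling that `requests` performs internally.

```python
# Illustrative sketch, not the skill's actual implementation.
import shlex
from urllib.parse import urlencode

profile_name = 'default"; rm -rf / #'  # attacker-controlled input

# UNSAFE (never do this): if this string were passed to
# subprocess.run(..., shell=True), the metacharacters in profile_name
# would be interpreted as shell commands.
unsafe_cmd = f'curl "https://api.example.com/profiles?name={profile_name}"'

# Safer, if a shell command is truly unavoidable: quote the untrusted value.
safe_cmd = f"curl https://api.example.com/profiles?name={shlex.quote(profile_name)}"

# Preferred: build the request with an HTTP library. urlencode() (like the
# params= argument in requests) percent-encodes shell metacharacters, so
# they never reach a shell at all.
url = "https://api.example.com/profiles?" + urlencode({"name": profile_name})
print(url)  # name=default%22%3B+rm+-rf+%2F+%23
```

The design point is that the HTTP-library route removes the shell from the data path entirely, whereas quoting only hardens a path that remains inherently risky.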
[View this report on SkillShield](https://skillshield.io/report/57e6685dd616f0e3)