Trust Assessment
permissions-broker received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 1 high and 1 medium severity (0 critical, 0 low). The findings are API Key Exfiltration via Dynamic Base URL and LLM Manipulation for Malicious Upstream Requests.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **API Key Exfiltration via Dynamic Base URL.** The skill's functions (`createBrokerRequest`, `pollBrokerStatus`, `executeBrokerRequest` in JavaScript/TypeScript; `create_request`, `await_result`, `execute_request` in Python) accept `baseUrl` as a parameter. The user's `apiKey` is then sent in the `Authorization` header to this `baseUrl`. If an attacker can manipulate the `baseUrl` parameter (e.g., through prompt injection to the LLM), the `apiKey` will be exfiltrated to an attacker-controlled server. **Remediation:** the `baseUrl` for the Permissions Broker API should be hardcoded within the skill's implementation or retrieved from a trusted, immutable configuration source. It should not be exposed as a parameter that can be influenced by untrusted user input or LLM generation. | LLM | SKILL.md:190 |
| MEDIUM | **LLM Manipulation for Malicious Upstream Requests.** The skill constructs `upstream_url`, `method`, `headers`, `body`, and `consent_hint` from user input (mediated by the LLM). A malicious user could craft a prompt that causes the LLM to generate parameters for a harmful `upstream_url` (e.g., deleting data, accessing sensitive information) or a misleading `consent_hint` designed to trick the end-user into approving an unauthorized action. While the Permissions Broker requires user approval, the LLM itself is manipulated into proposing the malicious action. **Remediation:** implement robust validation and sanitization for `upstream_url`, `method`, `headers`, `body`, and `consent_hint` before passing them to the broker. The LLM should be explicitly instructed to scrutinize these values for suspicious patterns (e.g., `DELETE` methods on sensitive paths, unusual hosts, or overly broad `consent_hint`s). The agent should always present the full, unredacted `upstream_url` and `method` to the user for approval, making it clear what action is being requested, even if a `consent_hint` is provided. | LLM | SKILL.md:200 |
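The high-severity remediation above can be illustrated with a minimal Python sketch. All names here are hypothetical (the report does not publish the skill's actual code): `BROKER_BASE_URL` stands in for the real broker endpoint, and `create_request` mirrors the skill's function of the same name but with the base URL pinned as a module constant instead of a caller-supplied parameter.

```python
# Hypothetical sketch: the broker endpoint is a module-level constant,
# so no prompt-injected parameter can redirect the Authorization header.
BROKER_BASE_URL = "https://broker.example.com"  # assumed endpoint, not from the skill

def create_request(api_key: str, upstream_url: str, method: str) -> dict:
    """Build the outgoing broker call.

    The api_key only ever travels to BROKER_BASE_URL; callers (and the LLM)
    cannot substitute a different host, closing the exfiltration path.
    """
    return {
        "url": f"{BROKER_BASE_URL}/requests",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"upstream_url": upstream_url, "method": method},
    }

req = create_request("sk-test", "https://api.example.com/items", "GET")
print(req["url"])  # always the pinned broker host
```

The design point is that the trust boundary moves from runtime input to source code: changing the destination now requires modifying the skill itself, which is exactly what a trusted, immutable configuration source enforces.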
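For the medium-severity finding, the suggested validation step could look like the sketch below. The allowlists and the `validate_upstream` helper are assumptions for illustration, not part of the skill; a real deployment would populate them from its own policy.

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}    # hypothetical per-deployment allowlist
ALLOWED_METHODS = {"GET", "POST"}      # destructive verbs excluded by default

def validate_upstream(upstream_url: str, method: str) -> None:
    """Reject suspicious LLM-proposed parameters before they reach the broker."""
    parsed = urlparse(upstream_url)
    if parsed.scheme != "https":
        raise ValueError(f"insecure scheme: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"host not allowlisted: {parsed.hostname!r}")
    if method.upper() not in ALLOWED_METHODS:
        raise ValueError(f"method not permitted: {method!r}")

validate_upstream("https://api.example.com/items", "GET")  # passes silently
```

Even with this gate in place, the report's last recommendation still applies: the agent should surface the full `upstream_url` and `method` to the user, since validation narrows the attack surface but cannot judge intent.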