Security Audit
PhantomBuster Automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
PhantomBuster Automation received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include broad access to PhantomBuster operational data and export functionality, and an hCaptcha solver with arbitrary proxy support that increases the attack surface.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Broad access to PhantomBuster operational data and export functionality.** The skill exposes PhantomBuster tools that can retrieve extensive operational data, including agent metadata, script metadata, organization resource usage, agent execution history, and a comprehensive agent usage report in CSV format. In particular, the `PHANTOMBUSTER_GET_ORGS_EXPORT_AGENT_USAGE` tool allows direct export of potentially sensitive operational statistics. If the LLM is compromised, an attacker could leverage these tools to exfiltrate a wide range of data from the connected PhantomBuster account. Mitigation: implement strict access controls and user consent mechanisms before allowing the LLM to invoke data export or broad data retrieval tools; ensure the LLM's context window is not used to store or transmit sensitive data retrieved by these tools without explicit user approval; consider granular permissions for different PhantomBuster API calls if possible. | LLM | SKILL.md:59 |
| MEDIUM | **hCaptcha solver with arbitrary proxy support increases the attack surface.** The `PHANTOMBUSTER_POST_HCAPTCHA` tool lets the LLM solve hCaptcha challenges, which can be a legitimate automation task. However, it accepts `siteKey`, `pageUrl`, and optionally `proxy` and `userAgent` parameters. If the LLM is compromised, an attacker could instruct it to solve captchas on malicious websites, or use the `proxy` parameter to route traffic through an attacker-controlled server for anonymization, data interception, or bypassing security measures on other systems. Mitigation: implement strict validation and allow-listing for the `pageUrl` and `proxy` parameters, especially when invoked by an LLM; consider restricting `proxy` to a predefined set of trusted proxies or removing the LLM's ability to specify arbitrary proxies; require user consent for hCaptcha solving on potentially sensitive or unknown URLs. | LLM | SKILL.md:69 |
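The mitigations above (consent gating for export tools, allow-listing for proxy parameters) can be sketched as a thin validation layer in front of tool dispatch. This is a minimal illustrative sketch, not part of the skill: the function name, the allow-list contents, and the `user_approved` flag are all hypothetical assumptions about how a host application might wire this in.

```python
from urllib.parse import urlparse

# Hypothetical policy data -- an integrator would define these, not the skill.
ALLOWED_PROXY_HOSTS = {"proxy.internal.example.com"}
CONSENT_REQUIRED_TOOLS = {"PHANTOMBUSTER_GET_ORGS_EXPORT_AGENT_USAGE"}

def validate_tool_call(tool_name: str, params: dict, user_approved: bool = False) -> bool:
    """Reject risky tool calls before they are forwarded to PhantomBuster."""
    # Gate broad data-export tools behind explicit user approval.
    if tool_name in CONSENT_REQUIRED_TOOLS and not user_approved:
        raise PermissionError(f"{tool_name} requires explicit user approval")
    # Allow-list the proxy host rather than accepting arbitrary proxies.
    proxy = params.get("proxy")
    if proxy:
        host = urlparse(proxy).hostname
        if host not in ALLOWED_PROXY_HOSTS:
            raise ValueError(f"proxy host {host!r} is not on the allow-list")
    return True
```

The key design choice is failing closed: any parameter value not explicitly permitted is rejected, so a compromised LLM cannot smuggle an attacker-controlled proxy or trigger an export silently.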
Full report: https://skillshield.io/report/65ba23138ff763f5