Security Audit
parsehub-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
parsehub-automation received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. The key findings are an unpinned dependency in the manifest and broad, unrestricted tool execution instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned dependency in manifest.** The skill manifest specifies a dependency on 'rube' without a version constraint. This can lead to unexpected behavior or security vulnerabilities if a new, incompatible, or malicious version of 'rube' is introduced into the supply chain: an attacker could publish a malicious 'rube' package that would be automatically pulled by this skill. Pin the dependency to a specific version or version range in the skill manifest (e.g., `mcp: ["rube==1.2.3"]` or `mcp: ["rube>=1.0.0,<2.0.0"]`). | LLM | SKILL.md |
| MEDIUM | **Broad, unrestricted tool execution instructions.** The skill instructs the LLM to use `RUBE_MULTI_EXECUTE_TOOL` with any `tool_slug` discovered via `RUBE_SEARCH_TOOLS`, and also mentions `RUBE_REMOTE_WORKBENCH` for generic `run_composio_tool()` bulk operations. This grants the LLM broad, dynamic access to all Parsehub operations without restrictions or guidance on which tools are safe in a given context. A compromised or misaligned LLM could leverage this to perform unintended or malicious actions via the Parsehub toolkit, such as deleting projects or scraping sensitive data from arbitrary URLs. Implement stricter controls within the skill to limit the scope of tools the LLM may execute: provide a whitelist of allowed tool slugs, or specific use-case examples that do not imply arbitrary tool execution. Ensure the underlying Rube MCP and Composio tools have appropriate access controls and sandboxing. | LLM | SKILL.md:54 |
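For the HIGH finding, one way to enforce the recommended pinning is a small CI check that rejects any manifest dependency lacking a version constraint. This is a hedged sketch, not part of SkillShield or the skill itself: `check_pinned` and the sample `mcp` list are hypothetical, and it assumes dependencies use the `rube==1.2.3` / `rube>=1.0.0,<2.0.0` syntax shown in the finding.

```python
import re

# Any PEP 440-style comparison operator counts as a version constraint.
VERSION_OP = re.compile(r"(===|==|~=|!=|>=|<=|>|<)")

def check_pinned(dep: str) -> bool:
    """Return True if the dependency string carries a version constraint."""
    return bool(VERSION_OP.search(dep))

# Hypothetical manifest entries, mirroring the finding's examples:
mcp = ["rube", "rube==1.2.3", "rube>=1.0.0,<2.0.0"]

unpinned = [d for d in mcp if not check_pinned(d)]
if unpinned:
    print(f"unpinned dependencies: {unpinned}")  # unpinned dependencies: ['rube']
```

A check like this fails fast at review time instead of letting an unpinned `rube` resolve to whatever version the registry serves later.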
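For the MEDIUM finding, the suggested whitelist could be enforced with a thin gate in front of dynamic tool dispatch. This is a sketch under stated assumptions: `ALLOWED_TOOLS`, `execute_tool`, and the `PARSEHUB_*` slugs are hypothetical names invented for illustration; only `RUBE_MULTI_EXECUTE_TOOL` is the real entry point named in the finding.

```python
# Hypothetical allowlist of tool slugs the skill is permitted to call.
ALLOWED_TOOLS = {
    "PARSEHUB_GET_PROJECT",
    "PARSEHUB_RUN_PROJECT",
}

def execute_tool(tool_slug: str, **params):
    """Gate dynamic execution: refuse any slug not on the allowlist."""
    if tool_slug not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_slug!r} is not on the allowlist")
    # In a real skill, the call would be forwarded to RUBE_MULTI_EXECUTE_TOOL here.
    return {"tool": tool_slug, "params": params}
```

The design point is that the allowlist lives in the skill (auditable, diffable) rather than in the model's judgment, so a misaligned completion cannot reach destructive operations like project deletion.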
Embed Code
[View full report](https://skillshield.io/report/ed38131cdcd4f048)
Powered by SkillShield