Security Audit: claid-ai-automation
Source: github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
claid-ai-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Broad access to Rube MCP tools and workbench.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Broad access to Rube MCP tools and workbench | LLM | SKILL.md:55 |

The skill is designed to interact with the Rube MCP, which acts as a gateway to various toolkits. The documentation explicitly details the use of `RUBE_SEARCH_TOOLS` to discover available tools and `RUBE_MULTI_EXECUTE_TOOL` to execute them; `RUBE_REMOTE_WORKBENCH` with `run_composio_tool()` is also mentioned for bulk operations. This grants the skill, and by extension a potentially compromised agent, the ability to discover and execute *any* tool available through the Rube MCP that the user has connected, not just those related to Claid AI. `RUBE_REMOTE_WORKBENCH` additionally suggests arbitrary code execution capabilities within the Composio environment. This broad access significantly increases the attack surface if the agent's inputs can be manipulated (e.g., via prompt injection).

Recommended mitigations:

1. **Least privilege**: If possible, restrict the skill's access to only the `claid_ai` toolkit or specific Claid AI tools within the Rube MCP.
2. **Input validation/sanitization**: Implement strict validation and sanitization of any user-provided input that influences `RUBE_SEARCH_TOOLS` queries or `RUBE_MULTI_EXECUTE_TOOL` parameters (e.g., `tool_slug`, `arguments`).
3. **Agent guardrails**: Ensure the host LLM has robust guardrails to prevent prompt injection attempts that could lead to the execution of unintended tools or actions via `RUBE_MULTI_EXECUTE_TOOL` or `RUBE_REMOTE_WORKBENCH`.
4. **Review `RUBE_REMOTE_WORKBENCH` usage**: Carefully assess the necessity and security implications of `RUBE_REMOTE_WORKBENCH` and `run_composio_tool()`, as they imply arbitrary code execution. If not strictly necessary, remove or restrict this capability.
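The least-privilege and input-validation mitigations above can be sketched as a guardrail that sits between the agent and the tool gateway. Only the tool names (`RUBE_MULTI_EXECUTE_TOOL`-style slugs) come from the audited skill's documentation; the `CLAID_AI_` prefix, the function names, and the forbidden-key list are illustrative assumptions, not part of the Rube or Composio APIs.

```python
# Hypothetical guardrail sketch: enforce least privilege before dispatching
# tool calls through a Rube-style gateway. The CLAID_AI_ prefix and the
# forbidden argument keys are assumptions for illustration only.
import re

ALLOWED_TOOL_PREFIX = "CLAID_AI_"  # assumption: Claid AI tools share this prefix
SLUG_PATTERN = re.compile(r"^[A-Z0-9_]{1,64}$")  # conservative slug shape

def validate_tool_call(tool_slug: str, arguments: dict) -> dict:
    """Reject calls outside the allowed toolkit and strip risky argument keys."""
    if not SLUG_PATTERN.fullmatch(tool_slug):
        raise ValueError(f"malformed tool slug: {tool_slug!r}")
    if not tool_slug.startswith(ALLOWED_TOOL_PREFIX):
        raise PermissionError(f"tool {tool_slug!r} is outside the allowed toolkit")
    # Drop keys that could smuggle executable payloads toward a workbench.
    forbidden = {"code", "script", "shell"}
    return {k: v for k, v in arguments.items() if k not in forbidden}

# An in-toolkit call passes (with risky keys stripped); anything else is refused.
safe_args = validate_tool_call(
    "CLAID_AI_UPSCALE_IMAGE",
    {"url": "https://example.com/a.png", "code": "ignored"},
)
```

The same pattern generalizes: rather than trusting the agent's tool selection, the host validates every `tool_slug` and argument set against an allowlist before anything reaches `RUBE_MULTI_EXECUTE_TOOL` or the workbench.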
Full report: https://skillshield.io/report/a7461d0c8bb4e9c0
Powered by SkillShield