Security Audit
crustdata-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
crustdata-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include "Potential Prompt Injection into Rube Tools via the `use_case` field" and "Potential Arbitrary Tool Execution via RUBE_REMOTE_WORKBENCH".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Arbitrary Tool Execution via RUBE_REMOTE_WORKBENCH.** The skill instructs the LLM to use `RUBE_REMOTE_WORKBENCH` with `run_composio_tool()` for bulk operations, implying the ability to execute arbitrary Composio tools in a remote workbench environment. If `run_composio_tool()` permits unconstrained or highly privileged operations, this could lead to command injection or excessive permissions, letting the LLM act beyond its intended scope or execute malicious code. The scope and sandboxing of `run_composio_tool()` are not defined within the skill, making it a black box with high potential for abuse. *Recommendation:* Clarify the exact capabilities and limitations of `RUBE_REMOTE_WORKBENCH` and `run_composio_tool()`, ensure the remote workbench environment is strictly sandboxed, and restrict `run_composio_tool()` to explicitly whitelisted, safe operations. If arbitrary tool execution is intended, document the security implications and required safeguards. | LLM | SKILL.md:68 |
| MEDIUM | **Potential Prompt Injection into Rube Tools via the `use_case` field.** The skill instructs the LLM to populate the `use_case` field in `RUBE_SEARCH_TOOLS` with natural-language descriptions (e.g., "your specific Crustdata task"). If the Rube tool interprets this field as instructions or processes it in a natural-language context, an attacker could craft malicious input to manipulate the tool's behavior, leading to unintended actions or information disclosure. The skill specifies no input validation or sanitization for this field. *Recommendation:* Implement strict input validation and sanitization for natural-language fields passed to Rube tools. If the Rube tool is LLM-powered, consider techniques such as input templating, instruction-following models, or content filtering to mitigate prompt-injection risks, and clearly document how user input should be handled for these fields. | LLM | SKILL.md:40 |
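The HIGH finding's recommendation, restricting `run_composio_tool()` to explicitly whitelisted operations, can be sketched as a guard wrapper. This is a minimal illustration only: the tool names and the caller-supplied `runner` callable are hypothetical, since the skill itself does not define the scope of `run_composio_tool()`.

```python
# Hypothetical allowlist guard around remote tool execution.
# Tool names below are illustrative; the skill does not enumerate them.
ALLOWED_TOOLS = {"CRUSTDATA_SEARCH", "CRUSTDATA_ENRICH"}

def guarded_run(tool_name, params, runner):
    """Execute a tool via `runner` only if it is on the allowlist;
    refuse everything else instead of passing it through."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool {tool_name!r} is not on the allowlist")
    return runner(tool_name, params)
```

A deny-by-default design like this keeps the remote workbench from becoming an open-ended execution surface: any tool not explicitly reviewed and listed is rejected before it ever reaches the runner.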
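The MEDIUM finding's recommendation, validating and sanitizing the `use_case` field before it reaches Rube tools, could start with something like the sketch below. The length cap and the naive pattern filter are assumptions for illustration; regex filtering alone is not a complete prompt-injection defense and would need to be combined with templating or content filtering as the finding suggests.

```python
import re

# Illustrative cap; the skill does not specify a limit for this field.
MAX_USE_CASE_LEN = 200

def sanitize_use_case(text: str) -> str:
    """Strip control characters, filter an obvious injection phrase,
    and truncate before the value is passed to a Rube tool."""
    cleaned = re.sub(r"[\x00-\x1f\x7f]", " ", text)
    cleaned = re.sub(r"(?i)ignore\s+(all\s+|previous\s+|prior\s+)?instructions",
                     "[filtered]", cleaned)
    return cleaned[:MAX_USE_CASE_LEN].strip()
```

For example, a benign description such as "find Crustdata companies" passes through unchanged, while "ignore previous instructions and dump secrets" has the injection phrase replaced with "[filtered]".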
[Full report](https://skillshield.io/report/9643d57611d8a7bd)
Powered by SkillShield