# Security Audit: codeinterpreter-automation

Source: github.com/ComposioHQ/awesome-claude-skills

## Trust Assessment
codeinterpreter-automation received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. The key findings are Arbitrary Tool Execution via RUBE_MULTI_EXECUTE_TOOL (critical), Potential Prompt Injection via Rube Tool Arguments (high), and Dependency on External Rube MCP Introduces Supply Chain Risk (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
## Layer Breakdown

## Behavioral Risk Signals

## Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Tool Execution via RUBE_MULTI_EXECUTE_TOOL.** The skill's core functionality executes tools dynamically via `RUBE_MULTI_EXECUTE_TOOL`, using a `tool_slug` and `arguments` obtained from `RUBE_SEARCH_TOOLS`. An attacker who can manipulate the `tool_slug` or `arguments` (e.g., by influencing the `use_case` passed to `RUBE_SEARCH_TOOLS`, or by injecting directly into `RUBE_MULTI_EXECUTE_TOOL` arguments) could execute arbitrary tools available through the Rube MCP. This constitutes a severe command-injection and excessive-permissions vulnerability, allowing unauthorized actions in the Codeinterpreter environment or other connected systems. `RUBE_REMOTE_WORKBENCH` likewise suggests a powerful execution capability. *Recommendation:* Enforce strict access control and validation on `tool_slug` and `arguments`; allow only explicitly permitted tools and argument structures; sandbox the tool execution environment; limit the set of tools exposed to the LLM; and for `RUBE_REMOTE_WORKBENCH`, tightly constrain its capabilities and sanitize its inputs. | LLM | SKILL.md:51 |
| HIGH | **Potential Prompt Injection via Rube Tool Arguments.** The skill instructs the LLM to call `RUBE_SEARCH_TOOLS` with a `use_case` parameter and `RUBE_MULTI_EXECUTE_TOOL` with `arguments`. If these parameters are populated directly from untrusted user input without sanitization, a malicious user could inject instructions into the LLM that processes these tool calls, manipulating its behavior or extracting sensitive information. *Recommendation:* Validate and sanitize all user-controlled input passed to the `use_case` and `arguments` fields; consider an allow-list for `use_case`, or sandbox the LLM that processes these inputs with strict guardrails against instruction following. | LLM | SKILL.md:37 |
| HIGH | **Dependency on External Rube MCP Introduces Supply Chain Risk.** The skill explicitly depends on an external Model Context Protocol (MCP) server hosted at `https://rube.app/mcp`, a significant supply chain risk. If the `rube.app` service is compromised, or malicious tools are introduced into its ecosystem, this skill could become a vector for executing those tools or actions within the agent's environment; the skill has no control over the external MCP's security posture or integrity. *Recommendation:* Acknowledge and mitigate the inherent risks of relying on external, dynamically loaded tools; vet the MCP provider; apply runtime monitoring and sandboxing to executed tools to limit their potential impact; and regularly audit the MCP for changes or suspicious activity. | LLM | SKILL.md:18 |
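The mitigation for the critical finding (restricting which tools and arguments may reach `RUBE_MULTI_EXECUTE_TOOL`) can be sketched as a validation gate placed between the LLM's requested tool call and the Rube MCP. This is a minimal illustration, not part of Rube's API: the function name, the allow-list contents, and the per-tool argument keys are all hypothetical and would need to reflect the actual tools a deployment permits.

```python
# Hypothetical guard for dynamic tool execution: only allow-listed tool slugs
# may run, and each tool's arguments must use only declared keys. Names here
# (validate_tool_call, ALLOWED_TOOLS, the slugs) are illustrative assumptions.

ALLOWED_TOOLS = {
    # tool_slug -> set of permitted argument keys
    "CODEINTERPRETER_EXECUTE_CODE": {"code_to_execute", "session_id"},
    "CODEINTERPRETER_UPLOAD_FILE": {"file_path", "session_id"},
}

def validate_tool_call(tool_slug: str, arguments: dict) -> dict:
    """Reject any tool, or any argument key, not explicitly allow-listed."""
    if tool_slug not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool not allow-listed: {tool_slug}")
    unexpected = set(arguments) - ALLOWED_TOOLS[tool_slug]
    if unexpected:
        raise ValueError(
            f"Unexpected arguments for {tool_slug}: {sorted(unexpected)}"
        )
    return arguments
```

A gate of this shape fails closed: anything the deployment has not explicitly declared is rejected before it reaches the MCP, which is the property the finding's recommendation asks for.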
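For the prompt-injection finding, a first line of defense is to screen user-supplied `use_case` text before it is forwarded to `RUBE_SEARCH_TOOLS`. The sketch below is an assumption-laden illustration: a length cap plus a pattern filter for the most obvious instruction-override phrasing. Pattern matching alone is not a complete defense against prompt injection and should complement, not replace, sandboxing and guardrails on the model side.

```python
import re

# Illustrative pre-filter for untrusted `use_case` input. The patterns and
# length cap are arbitrary examples, not a vetted deny-list; a determined
# attacker can evade regex screening, so treat this as defense in depth only.

SUSPICIOUS = re.compile(
    r"(ignore (all |previous )?instructions|system prompt|you are now|disregard)",
    re.IGNORECASE,
)

MAX_LEN = 500  # arbitrary cap; tune per deployment

def screen_use_case(text: str) -> str:
    """Reject over-long input or input with instruction-like phrasing."""
    if len(text) > MAX_LEN:
        raise ValueError("use_case too long")
    if SUSPICIOUS.search(text):
        raise ValueError("use_case contains instruction-like phrasing")
    return text
```

In practice this would sit alongside the allow-listing recommended for `tool_slug` and `arguments`, so that even input which slips past the filter cannot widen the set of executable tools.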
Full report: [skillshield.io/report/3c46293120c2fe59](https://skillshield.io/report/3c46293120c2fe59)
Powered by SkillShield