Security Audit
customgpt-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
customgpt-automation received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 low severity. Key findings include potentially arbitrary execution via `RUBE_REMOTE_WORKBENCH`, broad tool execution via `RUBE_MULTI_EXECUTE_TOOL`, and a potential prompt injection vector in the `RUBE_SEARCH_TOOLS` `use_case` parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 46/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potentially arbitrary execution via `RUBE_REMOTE_WORKBENCH`.** The skill suggests using `RUBE_REMOTE_WORKBENCH` for 'Bulk ops' with `run_composio_tool()`. The term 'workbench' and the ability to run `composio_tool()` imply a highly privileged and potentially arbitrary execution environment. If an attacker can manipulate the agent into using this tool, the result could be severe compromise, including unauthorized data manipulation, system configuration changes, or even remote code execution within the Composio ecosystem. Recommendation: avoid exposing 'workbench' or other arbitrary execution environments to agents. If such a tool is necessary, implement extremely strict sandboxing, input validation, and authorization, and clearly define and limit the scope of `run_composio_tool()` to prevent arbitrary actions. | LLM | SKILL.md:80 |
| HIGH | **Broad tool execution via `RUBE_MULTI_EXECUTE_TOOL`.** The skill instructs the agent to use `RUBE_MULTI_EXECUTE_TOOL`, which allows dynamic execution of any tool slug discovered via `RUBE_SEARCH_TOOLS`. This grants the agent broad permission to perform any operation exposed by the Customgpt toolkit through Rube MCP; a compromised agent could leverage it to perform unauthorized actions on Customgpt or other connected services. Recommendation: implement strict access controls and authorization checks within Rube MCP for each tool, keep the agent's permissions granular and limited to necessary operations, and consider a whitelist of allowed tool slugs for specific use cases. | LLM | SKILL.md:49 |
| MEDIUM | **Potential prompt injection vector in `RUBE_SEARCH_TOOLS` `use_case`.** The `RUBE_SEARCH_TOOLS` function takes a `use_case` parameter, a natural language query. If Rube MCP processes this `use_case` with an internal LLM, it could be vulnerable to prompt injection: a malicious agent or user could craft a `use_case` that manipulates the internal LLM's behavior, potentially leading to unintended tool suggestions, information disclosure, or other undesirable outcomes within the Rube MCP system. Recommendation: implement robust input sanitization and validation for natural language inputs like `use_case`; if an internal LLM processes this input, employ prompt engineering techniques (e.g., system prompts, input chaining, output parsing) to mitigate injection risks, and limit what the internal LLM can influence based on user input. | LLM | SKILL.md:39 |
| LOW | **Reliance on an external Rube MCP server introduces supply chain risk.** The skill explicitly depends on an external Rube MCP server at `https://rube.app/mcp`. While this is a standard way to integrate, it introduces supply chain risk: if the `rube.app` domain or the MCP server itself were compromised, the integrity and security of the tools and operations performed by the agent could be jeopardized. Recommendation: implement strong monitoring for external dependencies, consider private or self-hosted MCP instances for critical applications, and regularly audit the security posture of third-party services. | LLM | SKILL.md:25 |
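The whitelist recommended for the CRITICAL and HIGH findings can be sketched as a gate between tool discovery and tool dispatch. This is a minimal illustration, not Rube MCP's actual API: the function `execute_tool`, the slug names, and the dispatch shape are all hypothetical.

```python
# Sketch: allow-list gate for dynamically discovered tool slugs.
# All names here (execute_tool, the slugs, the return shape) are
# hypothetical; Rube MCP's real interface may differ.

ALLOWED_TOOL_SLUGS = {
    "CUSTOMGPT_CREATE_PROJECT",
    "CUSTOMGPT_LIST_PROJECTS",
}

def execute_tool(slug: str, params: dict) -> dict:
    """Dispatch a tool call only if its slug is explicitly allow-listed.

    Slugs returned by tool search are treated as untrusted input, so any
    slug outside the allow-list is rejected before dispatch.
    """
    if slug not in ALLOWED_TOOL_SLUGS:
        raise PermissionError(f"Tool slug not allow-listed: {slug!r}")
    # ... forward the call to the MCP server here ...
    return {"slug": slug, "params": params}
```

The key design point is that the allow-list is enforced at the execution boundary, so even a prompt-injected agent that "discovers" a dangerous slug via search cannot get it executed.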
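For the MEDIUM finding, the recommended sanitization of the `use_case` string can be sketched as follows. The length cap and the deny-list patterns are illustrative assumptions; pattern filtering alone does not stop determined prompt injection, so it should complement, not replace, structured prompts and output parsing.

```python
import re

MAX_USE_CASE_LEN = 200

# Hypothetical deny-list of common injection phrasings. Real deployments
# need stronger defenses (system prompts, output parsing), not just this.
INJECTION_PATTERNS = re.compile(
    r"(ignore (all )?previous instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def sanitize_use_case(use_case: str) -> str:
    """Normalize whitespace, cap length, and reject obvious injection text."""
    cleaned = " ".join(use_case.split())[:MAX_USE_CASE_LEN]
    if INJECTION_PATTERNS.search(cleaned):
        raise ValueError("use_case rejected: possible prompt injection")
    return cleaned
```

Rejecting outright (rather than silently stripping matches) makes suspicious inputs visible in logs, which supports the monitoring recommended in the LOW finding.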
[Full report](https://skillshield.io/report/3c5f3bb335d0770e)
Powered by SkillShield