Security Audit
uptimerobot-automation
github.com/ComposioHQ/awesome-claude-skillsTrust Assessment
uptimerobot-automation received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include Potential Prompt Injection via RUBE_SEARCH_TOOLS `use_case`, Arbitrary Code Execution via RUBE_REMOTE_WORKBENCH, Broad Tool Execution Capability via RUBE_MULTI_EXECUTE_TOOL.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary Code Execution via RUBE_REMOTE_WORKBENCH The skill explicitly mentions and encourages the use of `RUBE_REMOTE_WORKBENCH` with `run_composio_tool()`. The term "workbench" and the function `run_composio_tool()` strongly suggest the capability to execute arbitrary code or commands within the Rube MCP environment. This presents a critical command injection vulnerability, allowing an attacker to execute malicious code, access system resources, or exfiltrate data if they can control the arguments passed to this tool. This also represents excessive permissions, as it grants broad, potentially unrestricted, execution capabilities. Restrict the capabilities of `RUBE_REMOTE_WORKBENCH` and `run_composio_tool()` to a strictly defined and sandboxed set of operations. Implement strong input validation and whitelisting for any commands or code executed. Ensure that the workbench environment is isolated from sensitive system resources. If arbitrary code execution is intended, it must be performed in a highly secure, ephemeral, and resource-constrained sandbox. | LLM | SKILL.md:84 | |
| HIGH | Potential Prompt Injection via RUBE_SEARCH_TOOLS `use_case` The skill instructs users to provide natural language input for the `use_case` parameter in `RUBE_SEARCH_TOOLS`. If the Rube MCP system directly feeds this untrusted natural language into an underlying LLM without proper sanitization, sandboxing, or prompt engineering, an attacker could craft malicious `use_case` queries to manipulate the host LLM's behavior, extract sensitive information, or generate unintended outputs. Implement robust input sanitization and validation for the `use_case` parameter within the Rube MCP system. Ensure that natural language inputs are properly isolated or transformed before being used in LLM prompts to prevent manipulation. Consider using a dedicated, sandboxed LLM for tool search queries. | LLM | SKILL.md:43 | |
| HIGH | Broad Tool Execution Capability via RUBE_MULTI_EXECUTE_TOOL The skill guides the user to use `RUBE_MULTI_EXECUTE_TOOL` to execute tools discovered via `RUBE_SEARCH_TOOLS`. This mechanism allows for the execution of potentially any tool exposed by the Rube MCP system. While the skill advises using "schema-compliant args", the sheer breadth of available tools, some of which might have sensitive operations (e.g., data modification, external API calls), represents an excessive permission model. An attacker who can influence tool discovery or execution parameters could leverage this to perform unauthorized actions. Implement fine-grained access control for individual tools or tool categories within the Rube MCP. Ensure that the LLM agent's permissions are scoped down to only the necessary tools for its intended function. Regularly audit the tools exposed by Rube MCP and their potential impact. | LLM | SKILL.md:59 | |
| MEDIUM | Untrusted Input in Manifest Description The `description` field in the skill's manifest is untrusted input provided by the skill developer. If the host LLM processes this description directly as part of its prompt or context without proper sanitization or isolation, it could be vulnerable to prompt injection. While the current description does not contain an obvious exploit, the mechanism exists for a malicious developer to insert instructions that manipulate the host LLM. Ensure that all untrusted fields in skill manifests, such as `description`, are treated as data, not instructions. Implement strict sanitization and encoding before incorporating them into LLM prompts. Consider using a separate, sandboxed LLM or a non-LLM mechanism for processing metadata. | LLM | Manifest:1 |
Scan History
Embed Code
[](https://skillshield.io/report/101f8aeca337f8da)
Powered by SkillShield