Security Audit
textrazor-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
textrazor-automation received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include a potential prompt injection via the `use_case` parameter of `RUBE_SEARCH_TOOLS`, and a chained vulnerability enabling arbitrary tool execution via compromised `RUBE_SEARCH_TOOLS` output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 17, 2026 (commit 99e2a295). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
Chained Vulnerability: Arbitrary Tool Execution via Compromised RUBE_SEARCH_TOOLS Output
Severity: CRITICAL | Layer: LLM | Location: SKILL.md:59

The skill's "Core Workflow Pattern" explicitly dictates that the `tool_slug` and `arguments` for `RUBE_MULTI_EXECUTE_TOOL` are derived directly from the output of `RUBE_SEARCH_TOOLS`. If `RUBE_SEARCH_TOOLS` is vulnerable to prompt injection (as described in SS-LLM-001), an attacker could manipulate its output to suggest arbitrary tool slugs and arguments. The agent, following the prescribed workflow, would then execute these attacker-controlled tools with potentially malicious arguments via `RUBE_MULTI_EXECUTE_TOOL`. This creates a critical exploit path for arbitrary code or tool execution, leading to unauthorized actions, data manipulation, or system compromise, depending on the tools available and their permissions within the Rube MCP environment.

Remediation:
1. Address the prompt injection vulnerability in `RUBE_SEARCH_TOOLS` (see the SS-LLM-001 remediation).
2. Implement strict validation and allow-listing for the `tool_slug` and `arguments` passed to `RUBE_MULTI_EXECUTE_TOOL`, so that only expected, authorized tools and parameters can be executed even if `RUBE_SEARCH_TOOLS` output is compromised.
3. Require user confirmation or additional authorization before executing tools derived from potentially untrusted LLM outputs, especially for actions with significant impact.
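The allow-listing remediation above can be sketched as a validation gate that sits between the `RUBE_SEARCH_TOOLS` output and the `RUBE_MULTI_EXECUTE_TOOL` call. This is a minimal illustration, not SkillShield's or Rube's actual implementation; the `ALLOWED_TOOLS` entries and the `validate_tool_call` helper are hypothetical names chosen for the example.

```python
# Hypothetical allow-list gate: reject any tool_slug or argument name that
# was not explicitly authorized, even if RUBE_SEARCH_TOOLS suggested it.

ALLOWED_TOOLS = {
    # tool_slug -> set of permitted argument names (example entries)
    "TEXTRAZOR_ANALYZE_TEXT": {"text", "extractors"},
    "TEXTRAZOR_ANALYZE_URL": {"url", "extractors"},
}

def validate_tool_call(tool_slug: str, arguments: dict) -> dict:
    """Validate a tool call derived from LLM output before execution."""
    if tool_slug not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool not allow-listed: {tool_slug}")
    unexpected = set(arguments) - ALLOWED_TOOLS[tool_slug]
    if unexpected:
        raise PermissionError(
            f"Unexpected arguments for {tool_slug}: {sorted(unexpected)}"
        )
    return arguments
```

Because the check keys on exact tool slugs and argument names rather than trusting the search output, an injected suggestion such as a shell-execution tool is rejected before it ever reaches `RUBE_MULTI_EXECUTE_TOOL`.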
Potential Prompt Injection in RUBE_SEARCH_TOOLS `use_case` Parameter
Severity: HIGH | Layer: LLM | Location: SKILL.md:47

The skill instructs the user (or the agent) to provide a natural-language string for the `use_case` parameter of `RUBE_SEARCH_TOOLS`. If the Rube MCP backend processes this `use_case` with an underlying large language model (LLM) without robust sanitization or guardrails, a malicious user could inject prompts. This could manipulate the LLM's behavior, leading to the generation of unexpected or potentially malicious tool suggestions and schemas, which are then used in subsequent steps.

Remediation: Implement robust input sanitization and LLM guardrails for the `use_case` parameter in `RUBE_SEARCH_TOOLS` to prevent prompt injection. Consider using allow-lists for `use_case` values, or strictly limit the LLM's capabilities when processing this input so that it cannot generate arbitrary or malicious tool definitions.
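One possible shape of the input-sanitization step for `use_case` is sketched below. The marker patterns, length cap, and `sanitize_use_case` helper are illustrative assumptions; a real deployment would pair client-side checks like this with server-side guardrails, since pattern matching alone cannot catch every injection payload.

```python
import re

# Illustrative guardrail for the use_case string: cap its length and reject
# phrases commonly seen in prompt-injection payloads before the string
# reaches the backend LLM. Pattern list is an example, not exhaustive.

MAX_USE_CASE_LEN = 200
INJECTION_MARKERS = re.compile(
    r"(ignore (all|previous) instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def sanitize_use_case(use_case: str) -> str:
    """Return a trimmed use_case string, or raise if it looks like an injection."""
    cleaned = use_case.strip()[:MAX_USE_CASE_LEN]
    if INJECTION_MARKERS.search(cleaned):
        raise ValueError("use_case rejected: possible prompt injection")
    return cleaned
```

This keeps legitimate natural-language queries ("extract entities from news articles") untouched while refusing strings that try to override the model's instructions.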