Security Audit
front-automation
github.com/ComposioHQ/awesome-claude-skillsTrust Assessment
front-automation received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 0 high, 1 medium, and 0 low severity. Key findings include Prompt Injection via user-controlled 'use_case', Potential Command Injection via RUBE_REMOTE_WORKBENCH, Excessive Permissions due to broad tool execution design.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via user-controlled 'use_case' The `RUBE_SEARCH_TOOLS` function takes a `queries` parameter with a `use_case` field, explicitly described as 'your specific Front task'. If this user-controlled string is directly interpolated into a prompt for an underlying LLM or search mechanism, it creates a direct prompt injection vulnerability. An attacker could craft malicious input in the `use_case` to manipulate the LLM's behavior, extract sensitive information, or generate unintended tool calls. Implement robust input sanitization and validation for the `use_case` parameter. If an LLM is used, ensure the `use_case` is passed as a distinct, untrusted variable rather than directly concatenated into the system prompt. Consider using a separate, sandboxed LLM call for interpreting user queries for tool search. | LLM | SKILL.md:44 | |
| CRITICAL | Potential Command Injection via RUBE_REMOTE_WORKBENCH The `RUBE_REMOTE_WORKBENCH` tool, used for 'Bulk ops' with `run_composio_tool()`, suggests a capability to execute arbitrary code or commands. If the arguments passed to `run_composio_tool()` are not strictly validated and sandboxed, an attacker could inject malicious commands, leading to remote code execution on the host system or the underlying Composio platform. The term 'workbench' often implies a powerful execution environment. Thoroughly review the implementation of `RUBE_REMOTE_WORKBENCH` and `run_composio_tool()`. Ensure that any user-supplied input is strictly validated against a whitelist of allowed operations and arguments. Implement strong sandboxing and privilege separation for any code execution environment. Avoid direct execution of user-provided strings as commands. | LLM | SKILL.md:69 | |
| MEDIUM | Excessive Permissions due to broad tool execution design The core workflow instructs the agent to first discover tools using `RUBE_SEARCH_TOOLS` and then execute any discovered tool using `RUBE_MULTI_EXECUTE_TOOL` with `TOOL_SLUG_FROM_SEARCH`. This design pattern grants the agent broad access to all capabilities exposed by the 'Front' toolkit. If the 'Front' toolkit includes sensitive or destructive operations (e.g., deleting data, sending messages to all users, modifying critical settings), the agent could be manipulated to perform these actions without specific authorization for each operation type. Implement a more granular permission model. Instead of allowing execution of 'any' discovered tool, define a whitelist of allowed tool slugs or categories for specific agent tasks. Introduce human-in-the-loop approval for sensitive operations. Ensure the 'Front' toolkit itself enforces least privilege principles for its exposed APIs. | LLM | SKILL.md:52 |
Scan History
Embed Code
[](https://skillshield.io/report/609708b85e4e5af9)
Powered by SkillShield