Security Audit
opencage-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
opencage-automation received a trust score of 71/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Potential Command Injection via RUBE_REMOTE_WORKBENCH, Potential Data Exfiltration via Tool Execution or Workbench, and Broad Tool Execution via RUBE_MULTI_EXECUTE_TOOL.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via RUBE_REMOTE_WORKBENCH | LLM | SKILL.md:70 |
| HIGH | Potential Data Exfiltration via Tool Execution or Workbench | LLM | SKILL.md:49 |
| MEDIUM | Broad Tool Execution via RUBE_MULTI_EXECUTE_TOOL | LLM | SKILL.md:49 |

**HIGH — Potential Command Injection via RUBE_REMOTE_WORKBENCH (SKILL.md:70).** The skill documentation mentions `RUBE_REMOTE_WORKBENCH` for "Bulk ops" using `run_composio_tool()`. The term "workbench" often implies an environment capable of executing arbitrary code or commands. Without clear sandboxing or limitations specified in the documentation, this could allow command injection if the LLM is prompted to execute malicious code within the workbench, and it implies excessive permissions if the workbench has broad system access. *Recommendation:* Clarify the security model and sandboxing of `RUBE_REMOTE_WORKBENCH`. Ensure `run_composio_tool()` is strictly limited to predefined, safe operations and does not allow arbitrary code execution; if arbitrary code execution is intended, run it in a highly isolated, minimally privileged environment.

**HIGH — Potential Data Exfiltration via Tool Execution or Workbench (SKILL.md:49).** The ability to execute arbitrary tools via `RUBE_MULTI_EXECUTE_TOOL`, and potentially arbitrary code via `RUBE_REMOTE_WORKBENCH`, creates a significant risk of data exfiltration. If the LLM is prompted to use a tool that can read local files, access environment variables, or make external network requests to an attacker-controlled server, sensitive data could be leaked. The broad permissions implied by these execution mechanisms amplify the risk. *Recommendation:* Implement strict data egress policies. Sandbox the tools and the workbench environment so they cannot access sensitive local data or make unauthorized external network connections, and monitor and log all data access and network activity initiated by the skill.

**MEDIUM — Broad Tool Execution via RUBE_MULTI_EXECUTE_TOOL (SKILL.md:49).** `RUBE_MULTI_EXECUTE_TOOL` allows the LLM to execute any tool discovered via `RUBE_SEARCH_TOOLS`, granting the skill permissions equivalent to the sum of all available Opencage tools. If any underlying tool has excessive permissions (e.g., filesystem access or network requests beyond the Opencage API), the skill could be leveraged to perform unauthorized actions, and the dynamic nature of tool discovery increases the risk of the LLM being prompted to use a tool with unintended side effects. *Recommendation:* Implement fine-grained access control for individual tools or tool categories, restrict the LLM's tool access to only what its intended function requires, and clearly document the permissions and potential side effects of each tool.
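The per-tool access control recommended for the `RUBE_MULTI_EXECUTE_TOOL` finding can be sketched as an allowlist gate placed in front of the executor. This is a minimal illustration, not Composio's actual API: `run_composio_tool` is stubbed so the sketch runs stand-alone, and the tool slugs are hypothetical.

```python
# Hypothetical tool slugs; the real skill would enumerate only the
# Opencage operations it genuinely needs.
ALLOWED_TOOLS = {"OPENCAGE_GEOCODE", "OPENCAGE_REVERSE_GEOCODE"}


def run_composio_tool(tool_slug: str, arguments: dict) -> dict:
    # Stand-in for the real executor (assumed name from the skill docs);
    # returns a canned response so the example is self-contained.
    return {"tool": tool_slug, "ok": True}


class ToolNotAllowedError(RuntimeError):
    """Raised when the LLM requests a tool outside the allowlist."""


def run_tool_safely(tool_slug: str, arguments: dict) -> dict:
    """Execute a tool only if it appears on the explicit allowlist."""
    if tool_slug not in ALLOWED_TOOLS:
        raise ToolNotAllowedError(f"{tool_slug!r} is not allowlisted")
    return run_composio_tool(tool_slug, arguments)
```

The key design choice is that the allowlist is static and lives outside the LLM's control: tools discovered dynamically via `RUBE_SEARCH_TOOLS` still cannot execute unless a human has added their slug to the set.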
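The egress policy recommended for the exfiltration finding amounts to funneling all outbound requests through a single checkpoint that enforces a host allowlist and logs every attempt. A minimal sketch, assuming network calls can be routed through one helper; the allowed host is the public OpenCage API endpoint, used here illustratively.

```python
from urllib.parse import urlparse

# The only host this skill should legitimately contact (illustrative).
ALLOWED_HOSTS = {"api.opencagedata.com"}


class EgressBlockedError(RuntimeError):
    """Raised when an outbound request targets a non-allowlisted host."""


def check_egress(url: str) -> str:
    """Permit a request only if its host is allowlisted; log the decision."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        # In a real deployment this denial would go to an audit log.
        raise EgressBlockedError(f"outbound request to {host!r} blocked")
    print(f"egress allowed: {host}")  # audit-log stand-in
    return host
```

In practice this check belongs at the sandbox boundary (e.g., a network namespace or proxy) rather than in Python, since code running inside the workbench could otherwise bypass an in-process helper.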