Security Audit
stack-exchange-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
stack-exchange-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection / Excessive Permissions via RUBE_REMOTE_WORKBENCH.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 17, 2026 (commit 99e2a295). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection / Excessive Permissions via RUBE_REMOTE_WORKBENCH | Static | SKILL.md:76 |

The skill instructs the LLM to use `RUBE_REMOTE_WORKBENCH` with `run_composio_tool()` for bulk operations. The naming convention ('workbench', `run_composio_tool()`) strongly suggests that this tool may allow execution of arbitrary code or commands within the Rube MCP environment. If `run_composio_tool()` can be supplied with arguments that lead to shell execution, `eval`, `exec`, or subprocess calls, it represents a significant command injection vulnerability. It also implies excessive permissions: the LLM could be prompted to execute commands beyond the intended scope of Stack Exchange operations. The lack of detailed documentation for this powerful tool within the skill context exacerbates the risk, since the LLM might be guided to use it without a full understanding of its capabilities and potential dangers.

Recommendation: provide clear and explicit documentation for `RUBE_REMOTE_WORKBENCH` and `run_composio_tool()`, detailing their exact capabilities, security implications, and any restrictions. If the tool allows arbitrary code execution, consider restricting its use, removing it, or ensuring it operates within a heavily sandboxed and monitored environment. Implement strict input validation and authorization checks for any commands or code executed through this tool, and ensure the LLM is explicitly instructed on safe and intended uses only.
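The allowlisting and input-validation measures recommended above can be sketched as a thin gate in front of the tool executor. This is a minimal illustration only: the tool names, the executor signature, and the metacharacter check are assumptions for the example, not the skill's actual API.

```python
# Hypothetical hardening sketch. ALLOWED_TOOLS and the executor signature
# are illustrative assumptions, not taken from the audited skill.

ALLOWED_TOOLS = {
    # Only the Stack Exchange operations the skill is documented to need.
    "STACKEXCHANGE_SEARCH_QUESTIONS",
    "STACKEXCHANGE_POST_ANSWER",
}

# Characters that commonly enable shell injection if arguments ever
# reach a shell. A real implementation would avoid shell interpolation
# entirely rather than rely on blocklisting.
SHELL_METACHARACTERS = set(";|&`$><\n")

def safe_run_tool(executor, tool_name: str, arguments: dict) -> dict:
    """Gate every tool invocation behind an allowlist and argument checks
    before delegating to the real executor (e.g. run_composio_tool)."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not allowlisted")
    for key, value in arguments.items():
        if isinstance(value, str) and SHELL_METACHARACTERS & set(value):
            raise ValueError(f"argument {key!r} contains shell metacharacters")
    # Only validated calls reach the underlying workbench.
    return executor(tool_name, arguments)
```

Passing the executor as a parameter keeps the validation layer testable and makes it impossible for callers to bypass the gate without holding a reference to the raw executor.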
Full report: https://skillshield.io/report/fc4fd1ddfc744f59