Security Audit
hashnode-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
hashnode-automation received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is a potential command injection via `RUBE_REMOTE_WORKBENCH`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 17, 2026 (commit 99e2a295). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via RUBE_REMOTE_WORKBENCH | LLM | SKILL.md:76 |

The skill documentation suggests using `RUBE_REMOTE_WORKBENCH` with `run_composio_tool()` for bulk operations. Unlike `RUBE_MULTI_EXECUTE_TOOL`, which explicitly requires arguments to be schema-compliant, no constraints or schema requirements are stated for `run_composio_tool()` within the workbench context. This lack of explicit constraint, combined with the term "workbench" (implying a flexible execution environment) and "bulk ops" (often requiring custom logic), suggests a potential for command injection: an LLM prompted for bulk operations might generate arbitrary code or commands to pass to `run_composio_tool()` if it interprets the workbench as a permissive execution environment. If `run_composio_tool()` executes arbitrary code or commands supplied by the LLM, this could lead to unauthorized system access, data manipulation, or exfiltration.

Recommendation: clarify the exact capabilities and input constraints of `RUBE_REMOTE_WORKBENCH` and `run_composio_tool()`. If arbitrary code execution is allowed, implement strict sandboxing and input validation. If the tool is intended for specific, pre-defined operations, provide a clear, restrictive input schema so the LLM cannot inject arbitrary commands.
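The recommended mitigation can be sketched as a validation layer placed between LLM output and the tool-execution entry point. This is a minimal illustration only: the operation names, argument schemas, and the `run_composio_tool` call site are hypothetical assumptions, not the real Composio API.

```python
# Hypothetical sketch: gate a permissive tool-execution entry point behind an
# allowlist and a restrictive per-operation argument schema, so LLM-supplied
# input cannot smuggle in arbitrary code or shell commands.
# ALLOWED_OPS and the operation names below are illustrative, not real tools.

ALLOWED_OPS = {
    # operation name -> required argument names and their expected Python types
    "bulk_update_posts": {"post_ids": list, "status": str},
    "bulk_delete_drafts": {"draft_ids": list},
}

def validate_tool_call(op: str, args: dict) -> dict:
    """Reject any call that is not an allowlisted op with schema-compliant args."""
    if op not in ALLOWED_OPS:
        raise ValueError(f"operation not allowlisted: {op!r}")
    schema = ALLOWED_OPS[op]
    unexpected = set(args) - set(schema)
    if unexpected:
        raise ValueError(f"unexpected arguments: {sorted(unexpected)}")
    for name, expected_type in schema.items():
        if name not in args:
            raise ValueError(f"missing argument: {name!r}")
        if not isinstance(args[name], expected_type):
            raise ValueError(f"argument {name!r} must be {expected_type.__name__}")
    # Only now is it safe to forward, e.g. run_composio_tool(op, **args)
    return args

# A schema-compliant bulk operation passes validation; a free-form "op" such as
# a shell command is rejected before it ever reaches the execution layer.
validate_tool_call("bulk_update_posts", {"post_ids": [1, 2], "status": "published"})
```

The design choice here is a default-deny posture: anything not explicitly described by a schema is rejected, which is the opposite of the permissive "workbench" interpretation the finding warns about.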
Powered by SkillShield