Security Audit
digicert-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
digicert-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via `RUBE_REMOTE_WORKBENCH`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via `RUBE_REMOTE_WORKBENCH` | LLM | SKILL.md:82 |

**Potential Command Injection via `RUBE_REMOTE_WORKBENCH`.** The skill documentation references `RUBE_REMOTE_WORKBENCH` for bulk operations via `run_composio_tool()`, which suggests the capability to execute code or scripts in a remote environment. If the arguments or script content passed to `RUBE_REMOTE_WORKBENCH` can be influenced by untrusted input (e.g., a malicious user prompt), this could lead to arbitrary command execution or code injection within the Rube environment. The exact scope and sandboxing of `run_composio_tool()` are not documented, but the term "workbench" implies a programmable interface that could be exploited.

**Recommendation:** Ensure that `RUBE_REMOTE_WORKBENCH` and `run_composio_tool()` strictly validate all inputs and operate in a tightly sandboxed environment. Where possible, avoid exposing direct code-execution capabilities to LLM-controlled inputs. Document the security implications and best practices for using this tool, emphasizing input sanitization and least privilege.
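The input-validation mitigation recommended above can be sketched as follows. This is a minimal illustration only: the report does not document the actual API of `run_composio_tool()`, so the argument schema, the allowlist contents, and the `validate_workbench_args` helper are all hypothetical.

```python
import re

# Hypothetical allowlist of workbench operations this skill is expected to run.
ALLOWED_OPERATIONS = {"bulk_update", "bulk_export"}

# Conservative pattern for identifiers: letters, digits, underscore, hyphen.
SAFE_IDENTIFIER = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def validate_workbench_args(operation: str, target: str) -> dict:
    """Reject any argument that could smuggle shell metacharacters or
    script content into the remote workbench (assumed mitigation, not
    the skill's actual implementation)."""
    if operation not in ALLOWED_OPERATIONS:
        raise ValueError(f"operation not allowlisted: {operation!r}")
    if not SAFE_IDENTIFIER.match(target):
        raise ValueError(f"target contains disallowed characters: {target!r}")
    return {"operation": operation, "target": target}

# Validated arguments would then be forwarded to the tool, e.g.:
# run_composio_tool("RUBE_REMOTE_WORKBENCH", **validate_workbench_args(op, tgt))
```

The key design choice is allowlisting (accept only known-good values) rather than blocklisting individual dangerous characters, which is easy to bypass in a remote execution context.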
[Full report](https://skillshield.io/report/bdf0a26449c0dc98)
Powered by SkillShield