Security Audit
mistral_ai-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
mistral_ai-automation received a trust score of 95/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. Key findings include Potential Command Injection via RUBE_REMOTE_WORKBENCH.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 17, 2026 (commit 99e2a295). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Potential Command Injection via `RUBE_REMOTE_WORKBENCH` | LLM | SKILL.md:84 |

The skill documentation recommends using `RUBE_REMOTE_WORKBENCH` for bulk operations, explicitly mentioning `run_composio_tool()` in a loop with `ThreadPoolExecutor`. This strongly suggests that `RUBE_REMOTE_WORKBENCH` is a code execution environment. A maliciously prompted LLM could be instructed to inject arbitrary code into this environment via the arguments passed to `run_composio_tool()` or the surrounding workbench execution context, leading to command injection. The skill does not implement this itself, but it guides the LLM toward a tool with this capability.

Recommendation: Clarify the security boundaries and sandboxing of `RUBE_REMOTE_WORKBENCH`. If it is a general-purpose code execution environment, explicitly warn users about the risks of executing untrusted code, and implement strict input validation and sanitization for any code or commands passed to it.
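The mitigation described above can be sketched as a validation wrapper. This is a minimal illustration only: the real `run_composio_tool()` signature and the workbench internals are not specified by the skill, so the stub, the allowlist names, and the argument pattern below are all assumptions.

```python
import re
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stub standing in for the Composio tool runner; the real
# call shape is not documented in the skill, so this is illustrative only.
def run_composio_tool(tool: str, arguments: dict) -> dict:
    return {"tool": tool, "arguments": arguments}

# Allowlist of tools the bulk loop may invoke (example names, not real slugs).
ALLOWED_TOOLS = {"FETCH_RECORD", "SEARCH_DOCS"}

# Reject shell metacharacters and overlong values before anything reaches
# the workbench; permits only word characters, @, ., -, and spaces.
SAFE_ARG = re.compile(r"^[\w@.\- ]{1,256}$")

def safe_run(tool: str, arguments: dict) -> dict:
    """Validate the tool name and every argument, then dispatch."""
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool {tool!r} is not on the allowlist")
    for key, value in arguments.items():
        if not isinstance(value, str) or not SAFE_ARG.match(value):
            raise ValueError(f"argument {key!r} failed validation")
    return run_composio_tool(tool, arguments)

# Bulk operations still run in a thread pool, but each call is validated first.
items = ["record-1", "record-2", "record-3"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda i: safe_run("FETCH_RECORD", {"id": i}), items))
```

An allowlist is preferred over a denylist here because workbench-style execution environments make it impractical to enumerate every dangerous input.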
Embed Code
[SkillShield report](https://skillshield.io/report/dd5dff29aaab1785)
Powered by SkillShield