Security Audit
remote-retrieval-automation
github.com/ComposioHQ/awesome-claude-skillsTrust Assessment
remote-retrieval-automation received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Potential for Malicious Argument Injection via Rube MCP Tools.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 17, 2026 (commit 99e2a295). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential for Malicious Argument Injection via Rube MCP Tools The skill instructs the LLM to interact with Rube MCP tools, specifically `RUBE_SEARCH_TOOLS` and `RUBE_MULTI_EXECUTE_TOOL`. The `queries` field for `RUBE_SEARCH_TOOLS` and the `arguments` field for `RUBE_MULTI_EXECUTE_TOOL` are designed to be populated based on user input. If the LLM directly passes untrusted user input into these fields without robust sanitization or validation, an attacker could inject malicious instructions or data. This creates several credible exploit paths:
1. **Prompt Injection**: Malicious input in `queries` or `arguments` could manipulate the Rube MCP's interpretation of the request, leading to unintended tool selection or behavior.
2. **Data Exfiltration**: If the underlying 'Remote Retrieval' tools accept arguments such as URLs, file paths, or database queries, an attacker could craft inputs to retrieve sensitive data from internal networks or local files (if accessible by the tool) and have the agent return it.
3. **Command Injection**: If any specific 'Remote Retrieval' tool is vulnerable to command injection through its arguments (e.g., executing shell commands based on input), an attacker could execute arbitrary commands on the system hosting that tool.
The skill's 'Known Pitfalls' advise 'Schema compliance' and 'Always search first', which are good practices for the LLM, but do not inherently prevent a compromised LLM from constructing malicious, yet schema-compliant, arguments based on an injected prompt. Implement robust input validation and sanitization for all user-provided data before it is used to construct arguments for Rube MCP tool calls. The LLM should be explicitly instructed to filter, escape, or transform user input to prevent malicious payloads. Consider using allow-lists for argument values where possible, or strictly adhering to expected data types and formats. The agent should also be instructed to never return raw sensitive data retrieved from tools directly to the user without explicit confirmation or redaction. | LLM | SKILL.md:40 |
Scan History
Embed Code
[](https://skillshield.io/report/d5ee5237369742ad)
Powered by SkillShield