Security Audit
short-io-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
short-io-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding: 0 critical, 0 high, 1 medium, and 0 low severity. The sole finding is a potential prompt injection via the RUBE_SEARCH_TOOLS `use_case` parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Potential Prompt Injection via RUBE_SEARCH_TOOLS `use_case` | LLM | SKILL.md:39 |

The skill instructs the host LLM to call `RUBE_SEARCH_TOOLS` with a `use_case` parameter (e.g., `queries: [{use_case: "your specific Short IO task"}]`). If the host LLM populates `use_case` directly from untrusted user input, and `RUBE_SEARCH_TOOLS` internally uses an LLM or interprets this string flexibly, a malicious user could inject instructions. This could manipulate the behavior of `RUBE_SEARCH_TOOLS` or lead to unintended actions by the overall agent, since the host LLM is manipulated into passing those instructions downstream.

Recommended mitigation: implement robust input sanitization and validation for the `use_case` parameter before passing it to `RUBE_SEARCH_TOOLS`. If `RUBE_SEARCH_TOOLS` is LLM-powered, consider using a separate, sandboxed LLM call to interpret user intent for `use_case`, or restrict its capabilities so it cannot follow instructions embedded in untrusted input.
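As a rough illustration of the recommended sanitization step, the sketch below shows one possible guard applied to the `use_case` string before it reaches `RUBE_SEARCH_TOOLS`. Only the tool name and parameter come from the report; the length limit, the pattern list, and the function name `sanitize_use_case` are illustrative assumptions, not part of the audited skill.

```python
import re

# Illustrative guard for the `use_case` string described in the finding.
# The limit and patterns below are assumptions for demonstration; a real
# deployment would tune them (or prefer an allow-list of expected tasks).

MAX_LEN = 200

# Phrases commonly seen in prompt-injection payloads.
SUSPICIOUS = re.compile(
    r"(ignore (all|previous|above)|disregard .*instructions|"
    r"system prompt|you are now|act as)",
    re.IGNORECASE,
)

def sanitize_use_case(raw: str) -> str:
    """Return a cleaned use_case string, or raise ValueError if it
    looks empty, oversized, or instruction-like."""
    text = " ".join(raw.split())  # collapse whitespace and newlines
    if not text:
        raise ValueError("use_case is empty")
    if len(text) > MAX_LEN:
        raise ValueError("use_case exceeds length limit")
    if SUSPICIOUS.search(text):
        raise ValueError("use_case contains instruction-like phrasing")
    return text
```

A deny-list like this only raises the bar; as the report notes, if the downstream tool is LLM-powered, structurally restricting its capabilities is the stronger defense.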
Powered by SkillShield