Trust Assessment
u301-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. Key findings include Potential Prompt Injection via Dynamic Tool Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Potential Prompt Injection via Dynamic Tool Arguments.** The skill instructs the host LLM to construct tool calls (`RUBE_SEARCH_TOOLS`, `RUBE_MULTI_EXECUTE_TOOL`) whose arguments, such as `use_case` and `arguments`, are dynamically filled. If those values are derived directly from untrusted user input without sanitization or validation by the host LLM, a malicious user could inject instructions or data into the Rube MCP or the underlying U301 system, leading to unintended actions, data exposure, or manipulation of the tool's behavior. *Recommendation:* instruct the LLM to sanitize or validate user-provided input before populating the `use_case` and `arguments` fields, state explicitly that these fields must not accept raw untrusted input, and where possible use structured data or enumerated options instead of free-form text for sensitive parameters. | LLM | SKILL.md:47 |
| MEDIUM | **Potential Prompt Injection via Dynamic Tool Arguments.** Same issue at a second call site: dynamically filled `use_case` and `arguments` values may carry unsanitized user input into the Rube MCP tool calls. The same recommendation applies. | LLM | SKILL.md:59 |
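The mitigation recommended above (enumerated options plus validation of free-form arguments) can be sketched as follows. This is a minimal illustration, not part of the skill itself: the allow-list entries, the `SUSPICIOUS` pattern, and the `build_tool_call` helper are all hypothetical names chosen for the example, and a real deployment would tailor both the enumeration and the rejection rules to its own tools.

```python
import re

# Hypothetical allow-list of use cases; a real skill would enumerate
# the operations it actually supports instead of accepting free text.
ALLOWED_USE_CASES = {"shorten_link", "expand_link", "list_links"}

# Illustrative patterns suggesting an attempt to smuggle instructions
# to the LLM or downstream tool rather than pass plain data.
SUSPICIOUS = re.compile(
    r"(ignore (all |previous |prior )?instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def build_tool_call(use_case: str, raw_args: dict) -> dict:
    """Validate user-derived input before it is placed into a tool call."""
    if use_case not in ALLOWED_USE_CASES:
        raise ValueError(f"unsupported use_case: {use_case!r}")
    clean_args = {}
    for key, value in raw_args.items():
        text = str(value)
        if SUSPICIOUS.search(text):
            raise ValueError(f"argument {key!r} contains disallowed content")
        clean_args[key] = text.strip()
    return {
        "tool": "RUBE_MULTI_EXECUTE_TOOL",
        "arguments": {"use_case": use_case, "arguments": clean_args},
    }
```

The design point is that `use_case` never accepts raw text at all (closed enumeration), while genuinely free-form values like URLs pass through an explicit rejection filter, so untrusted input cannot silently become an instruction.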
Full report: https://skillshield.io/report/232111afdfdb05b7
Powered by SkillShield