Trust Assessment
here-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. The single finding, of medium severity, concerns broad access to external toolkit operations.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Broad access to external toolkit operations.** The skill gives the LLM a generic interface (`RUBE_MULTI_EXECUTE_TOOL`, `RUBE_REMOTE_WORKBENCH`) to discover and execute any operation exposed by the 'Here' toolkit via Rube MCP. The LLM can therefore invoke any function, including sensitive or destructive ones, with no restrictions defined in the skill itself. While this is a design characteristic of a generic API wrapper, it grants the LLM broad capabilities over the 'Here' ecosystem and increases the risk of unintended actions if the LLM is compromised or misinterprets user intent. *Remediation:* implement granular access controls within Rube MCP and the 'Here' toolkit to restrict the operations available to the LLM, create more specialized skills that expose only a subset of 'Here' functionality, or add LLM-side guardrails that limit which operations can be invoked through this broad interface. | LLM | SKILL.md:50 |
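The LLM-side guardrail recommended in the finding can be sketched as an allowlist wrapper sitting between the model and the generic execute tool. This is illustrative only: the tool name `RUBE_MULTI_EXECUTE_TOOL` comes from the finding, but the `execute` callable, the allowlist contents, and the `GuardrailError` type are assumptions for the sketch, not part of Rube MCP's actual API.

```python
# Hypothetical LLM-side guardrail: only operations on an explicit
# allowlist may pass through the generic execute interface (standing
# in for RUBE_MULTI_EXECUTE_TOOL); everything else is rejected
# before it reaches the 'Here' toolkit.

ALLOWED_OPERATIONS = {      # assumed subset of 'Here' operations
    "HERE_GEOCODE",
    "HERE_ROUTE_LOOKUP",
}

class GuardrailError(Exception):
    """Raised when a requested operation is outside the allowlist."""

def guarded_execute(operation: str, params: dict, execute) -> dict:
    """Forward `operation` to the generic executor only if allowlisted.

    `execute` is a placeholder for the real multi-execute call; this
    sketch does not depend on its actual signature.
    """
    if operation not in ALLOWED_OPERATIONS:
        raise GuardrailError(f"operation {operation!r} is not allowlisted")
    return execute(operation, params)
```

Denying by default and enumerating permitted operations keeps the failure mode safe: a compromised or confused model can only reach the small surface the skill author deliberately exposed.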
Full report: https://skillshield.io/report/1441a2c83baaca55