Security Audit
tomtom-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
tomtom-automation received a trust score of 82/100, placing it in the Mostly Trusted category. The skill passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. The key findings are an unpinned Rube MCP dependency and broad tool execution capability via Rube MCP.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 17, 2026 (commit 99e2a295). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned Rube MCP dependency.** The skill's manifest declares a dependency on the `rube` MCP without a version constraint, so the skill always uses the latest version of that MCP. Updates could introduce breaking changes, vulnerabilities, or even malicious code without explicit review of this skill, posing a significant supply chain risk. *Recommendation:* pin the `rube` MCP dependency to a specific, known-good version, and regularly review and update the pinned version. | LLM | Manifest:1 |
| MEDIUM | **Broad tool execution capability via Rube MCP.** The skill instructs the LLM to use `RUBE_MULTI_EXECUTE_TOOL` and `RUBE_REMOTE_WORKBENCH` to perform TomTom operations. These tools can execute any operation exposed by the Rube MCP's TomTom toolkit, as discovered via `RUBE_SEARCH_TOOLS`. While this is the intended automation functionality, it grants the LLM broad and potentially unconstrained access to TomTom functionality; a compromised or misaligned LLM could leverage that access to perform unauthorized or destructive actions. *Recommendation:* implement guardrails and access controls around invocations of `RUBE_MULTI_EXECUTE_TOOL` and `RUBE_REMOTE_WORKBENCH`, constrain actions to those explicitly requested by the user and within defined safety boundaries, and consider human approval steps for sensitive operations. | LLM | SKILL.md:50 |
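The MEDIUM finding's recommended mitigation (an operation allowlist plus human approval for sensitive actions) can be sketched as follows. This is a minimal illustration, not part of the audited skill: the tool names `RUBE_MULTI_EXECUTE_TOOL` and `RUBE_REMOTE_WORKBENCH` come from the report, but the operation names, allowlist, and approval hook are hypothetical assumptions.

```python
# Hypothetical guardrail around broad MCP tool execution.
# Tool names are from the audit report; operation names, the
# allowlist, and the approval hook are illustrative assumptions.

GUARDED_TOOLS = {"RUBE_MULTI_EXECUTE_TOOL", "RUBE_REMOTE_WORKBENCH"}
ALLOWED_OPERATIONS = {"tomtom.geocode", "tomtom.route"}   # hypothetical op names
SENSITIVE_PREFIXES = ("tomtom.delete", "tomtom.admin")    # hypothetical


def human_approve(operation: str) -> bool:
    """Human-approval hook for sensitive operations.

    In this sketch it denies by default; a real deployment would
    surface the request to a reviewer instead.
    """
    return False


def guard_tool_call(tool: str, operation: str) -> bool:
    """Return True only if the requested operation may be executed."""
    if tool not in GUARDED_TOOLS:
        return True  # other tools are outside the scope of this guard
    if operation.startswith(SENSITIVE_PREFIXES):
        return human_approve(operation)  # sensitive ops need explicit approval
    return operation in ALLOWED_OPERATIONS  # everything else: allowlist only
```

Such a check would sit between the LLM's tool request and the actual MCP invocation, so that operations outside the allowlist are rejected rather than forwarded.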