Security Audit
telnyx-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
telnyx-automation received a trust score of 85/100, placing it in the Mostly Trusted category. The skill passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified one finding (0 critical, 1 high, 0 medium, 0 low severity): the skill enables execution of arbitrary Rube MCP tools.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated four-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Skill enables execution of arbitrary Rube MCP tools | LLM | SKILL.md:49 |

The skill's core workflow pattern explicitly instructs the LLM to use `RUBE_SEARCH_TOOLS` to discover available tools and then `RUBE_MULTI_EXECUTE_TOOL` to execute any discovered tool by its `tool_slug`. This design grants the AI agent broad permission to interact with and execute any tool exposed by the Rube MCP, with no restrictions in the skill's definition on the types or capabilities of tools that may be executed. If the Rube MCP exposes tools with sensitive capabilities (e.g., filesystem access, arbitrary network requests, system commands), this could lead to privilege escalation, data exfiltration, or unintended actions. The mention of `RUBE_REMOTE_WORKBENCH` with `run_composio_tool()` for bulk operations further suggests a mechanism for executing arbitrary Composio tools, broadening the potential attack surface.

Recommendation: implement stricter controls or a whitelist of `tool_slug` values that `RUBE_MULTI_EXECUTE_TOOL` may execute within this specific skill. Alternatively, ensure that the Rube MCP exposes only tools with appropriate, least-privilege capabilities to AI agents, or that the agent's execution environment is sandboxed to prevent unintended side effects.
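The recommended allowlist control can be sketched as a thin gate in front of the tool executor. This is a minimal illustration, not part of the audited skill: the slug names, the `tool_calls` shape, and the `executor` callable are all assumptions for demonstration.

```python
# Hypothetical sketch: enforce a tool_slug allowlist before dispatching
# calls to an executor such as RUBE_MULTI_EXECUTE_TOOL.
# The slugs below are illustrative, not taken from the audited skill.
ALLOWED_TOOL_SLUGS = {"telnyx_send_message", "telnyx_list_numbers"}

def execute_with_allowlist(tool_calls, executor):
    """Run only tool calls whose slug is explicitly allowlisted.

    tool_calls: list of dicts like {"tool_slug": str, "arguments": dict}
    executor:   callable(tool_slug, arguments) performing the real dispatch
    """
    blocked = [c["tool_slug"] for c in tool_calls
               if c["tool_slug"] not in ALLOWED_TOOL_SLUGS]
    if blocked:
        # Fail closed: reject the whole batch if any slug is unknown.
        raise PermissionError(f"blocked non-allowlisted tool slugs: {blocked}")
    return [executor(c["tool_slug"], c.get("arguments", {})) for c in tool_calls]
```

Failing closed on the entire batch (rather than silently skipping blocked calls) keeps the agent's behavior predictable and surfaces misconfigurations immediately.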
Full report: https://skillshield.io/report/78018814e1a90482