Security Audit
canny-automation
github.com/ComposioHQ/awesome-claude-skillsTrust Assessment
canny-automation received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Potential Prompt Injection via Tool Parameters, Data Exfiltration Risk via Arbitrary Tool Execution, Excessive Permissions Enabled by Broad Tool Access.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection via Tool Parameters The skill instructs the LLM to use `RUBE_SEARCH_TOOLS` with open-ended string parameters like `use_case` and `known_fields`. A malicious user could craft these strings to attempt prompt injection against the Rube MCP's underlying LLM or the host LLM that processes these parameters, potentially manipulating its behavior or extracting sensitive information. Similarly, the `arguments` parameter for `RUBE_MULTI_EXECUTE_TOOL`, if controlled by a malicious prompt, could be used to inject harmful instructions. Implement strict input validation and sanitization for all user-provided strings passed to tool parameters, especially those that might be interpreted by an LLM. Consider using allow-lists for `use_case` or `known_fields` where possible, or ensure the Rube MCP itself has robust prompt injection defenses for these inputs. | LLM | SKILL.md:30 | |
| HIGH | Data Exfiltration Risk via Arbitrary Tool Execution The `RUBE_MULTI_EXECUTE_TOOL` allows the execution of any discovered Canny tool. If a malicious prompt can control the `tool_slug` and `arguments`, it could instruct the skill to call Canny API methods that read sensitive data (e.g., private feedback, user details) and then exfiltrate this data through the LLM's response. The skill's instructions do not include mechanisms to restrict or audit the types of Canny operations that can be performed. Implement a policy to restrict the `tool_slug` to an allow-list of safe operations, or introduce a human-in-the-loop approval process for sensitive Canny operations. Ensure that the LLM's responses are filtered to prevent the unintentional or malicious exfiltration of sensitive data obtained from tool outputs. | LLM | SKILL.md:48 | |
| MEDIUM | Excessive Permissions Enabled by Broad Tool Access The skill encourages the use of `RUBE_MULTI_EXECUTE_TOOL` to perform 'Canny operations' generally, without specifying any scope limitations for the underlying Canny connection. If the Canny connection configured within Rube MCP has broad permissions (e.g., admin access), then the skill effectively grants the LLM (and by extension, a potentially malicious user) those broad permissions, increasing the impact of any compromise. Advise users to configure the Canny connection in Rube MCP with the principle of least privilege, granting only the necessary permissions for the intended use cases. The skill documentation should explicitly warn about the implications of broad permissions and recommend scoping connections appropriately. | LLM | SKILL.md:48 |
Scan History
Embed Code
[](https://skillshield.io/report/d2ed737dead18e04)
Powered by SkillShield