Security Audit
affinity-automation
github.com/ComposioHQ/awesome-claude-skillsTrust Assessment
affinity-automation received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Broad access to external system operations, Potential prompt injection via 'use_case' parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 17, 2026 (commit 99e2a295). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Broad access to external system operations The skill grants the LLM access to a wide range of operations within the Affinity system via Rube MCP's `RUBE_MULTI_EXECUTE_TOOL` and `RUBE_REMOTE_WORKBENCH`. These tools allow for dynamic discovery and execution of any available Affinity operation, effectively giving the LLM the same permissions as the connected Affinity account. This broad access, while intended for automation, poses a significant risk if the LLM is compromised or manipulated, as it could lead to unauthorized data access, modification, or deletion within Affinity. Implement fine-grained access control within the Rube/Composio platform for the Affinity toolkit. Ensure that the connection used by the LLM has the minimum necessary permissions (least privilege principle) to perform its intended tasks. Regularly review and audit the actions performed by the LLM through this skill. | LLM | SKILL.md:50 | |
| MEDIUM | Potential prompt injection via 'use_case' parameter The skill instructs the LLM to use `RUBE_SEARCH_TOOLS` with a `use_case` parameter, which is expected to contain natural language describing an Affinity task. If the Rube MCP backend processes this `use_case` using its own LLM, and if the `use_case` string is derived directly from untrusted user input without proper sanitization, it could lead to a prompt injection attack against the Rube backend's LLM. A malicious user could craft a `use_case` to manipulate the Rube system's behavior or extract information. Ensure that any natural language input passed to `use_case` in `RUBE_SEARCH_TOOLS` is thoroughly sanitized and validated if it originates from untrusted sources. The Rube MCP backend should implement robust prompt injection defenses for any LLM-powered interpretation of this parameter. Consider restricting the complexity or content of `use_case` if direct LLM interpretation is used. | LLM | SKILL.md:39 |
Scan History
Embed Code
[](https://skillshield.io/report/1ac55c79c18dddb7)
Powered by SkillShield