Security Audit
fidel-api-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
fidel-api-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding (0 critical, 0 high, 1 medium, 0 low severity): a potential prompt injection via the RUBE_SEARCH_TOOLS `use_case` parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Potential Prompt Injection via RUBE_SEARCH_TOOLS `use_case` parameter | LLM | SKILL.md:38 |

The skill instructs the host LLM to call `RUBE_SEARCH_TOOLS` with a `use_case` parameter, described as "your specific Fidel API task". If the Rube MCP system uses an internal LLM to interpret this `use_case` string, malicious input from the host LLM could lead to prompt injection against Rube MCP's internal LLM. This could manipulate Rube MCP's behavior, potentially leading to unintended tool selections, information disclosure from Rube MCP's context, or other undesirable actions.

Recommendation: implement robust input validation and sanitization for the `use_case` parameter within the Rube MCP system. If Rube MCP is LLM-driven, consider techniques such as prompt templating, input filtering, or a separate safety LLM to prevent malicious instructions from reaching the core LLM. The skill documentation should also advise users against passing arbitrary or untrusted input to this parameter.
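As a rough illustration of the recommended mitigation, the sketch below shows one way a receiving system could validate a `use_case` string before handing it to an internal LLM. Everything here is a hypothetical example, not part of the Rube MCP or Fidel API: the function name `sanitize_use_case`, the length cap, and the pattern list are assumptions, and a real deployment would need a far more robust filter (pattern matching alone cannot catch all injections).

```python
import re

# Hypothetical mitigation sketch: validate/sanitize a `use_case`
# string before it reaches an internal LLM. Not a Rube MCP API.

MAX_LEN = 200  # assumed length cap to limit injection surface

# Example phrases commonly used to override an LLM's instructions.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"system prompt",
    r"you are now",
]

def sanitize_use_case(use_case: str) -> str:
    """Return a cleaned use_case string, or raise ValueError if it
    looks like a prompt-injection attempt."""
    cleaned = use_case.strip()[:MAX_LEN]
    # Strip control characters that could smuggle formatting tricks.
    cleaned = re.sub(r"[\x00-\x1f\x7f]", " ", cleaned)
    lowered = cleaned.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError(f"use_case rejected: matched {pattern!r}")
    return cleaned

# A benign task description passes through unchanged.
print(sanitize_use_case("create a Fidel API card-linked offer"))
```

A deny-list like this is only a first line of defense; the report's stronger suggestions (prompt templating, a separate safety LLM) treat the string as untrusted data rather than trying to enumerate bad inputs.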