Security Audit
OpenAI Automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
OpenAI Automation received a trust score of 68/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings: reliance on an external MCP server introduces supply chain risk; the user's OpenAI API key is entrusted to a third-party service; and the skill's tools are potentially exposed to prompt injection against OpenAI models.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 17, 2026 (commit 99e2a295). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Reliance on external MCP server introduces supply chain risk.** The skill explicitly requires and instructs users to connect to the Composio MCP server at `https://rube.app/mcp`. The security and integrity of this third-party server are critical to the overall security of the skill. A compromise of `rube.app/mcp` could lead to unauthorized access to OpenAI API keys, data exfiltration, or manipulation of tool execution through the provided OpenAI tools. Users should be aware that the security of this skill depends on the Composio MCP server. Skill developers should ensure robust security practices for their MCP infrastructure and provide transparency regarding its security posture. Consider providing options for self-hosting or alternative, verifiable MCP implementations. | LLM | SKILL.md:16 |
| HIGH | **OpenAI API key entrusted to third-party service.** The setup instructions indicate that users will connect their OpenAI account via API key authentication to the Composio MCP server (`rube.app/mcp`). This means the user's sensitive OpenAI API key will be transmitted to and stored by a third-party service. If this service is compromised, the API key could be harvested and misused, leading to unauthorized access to the user's OpenAI account and potential billing abuse. Users should exercise caution when entrusting API keys to third-party services. Skill developers should clearly document how API keys are handled, stored, and protected by the Composio MCP server, including encryption, access controls, and compliance certifications. Consider implementing OAuth or other token-based authentication mechanisms where possible, instead of direct API key submission. | LLM | SKILL.md:17 |
| MEDIUM | **Potential for prompt injection against OpenAI models.** The `OPENAI_CREATE_RESPONSE`, `OPENAI_CREATE_EMBEDDINGS`, and `OPENAI_CREATE_IMAGE` tools expose direct `input` and `prompt` parameters that accept arbitrary strings. If the host LLM passes untrusted user input directly into these parameters without proper sanitization or validation, a malicious user could craft inputs to manipulate the behavior of the underlying OpenAI models (e.g., generating unintended content, bypassing safety filters, or attempting to extract information if the model has access to sensitive context). The host LLM integrating this skill must implement robust input sanitization, validation, and potentially content moderation for all user-provided inputs before passing them to the `input` or `prompt` parameters of these OpenAI tools. Consider using a dedicated prompt injection defense layer to mitigate this risk. | LLM | SKILL.md:27 |
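The sanitization step recommended in the prompt-injection finding can be sketched as a pre-filter applied before untrusted text reaches the `input`/`prompt` parameters. This is a minimal illustration under assumed conventions, not code from the skill or from Composio; the function name, length cap, and deny-list patterns are all hypothetical, and a production defense layer would pair such a filter with a classifier and output moderation.

```python
import re

# Assumed maximum prompt length; tune to the deployment's needs.
MAX_PROMPT_LEN = 4000

# Naive deny-list of common injection phrasings (hypothetical examples).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"system prompt", re.I),
    re.compile(r"you are now", re.I),
]

def sanitize_prompt(text: str) -> str:
    """Strip control characters, cap length, and reject obvious injection attempts."""
    # Drop non-printable control characters, keeping newlines and tabs.
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    if len(cleaned) > MAX_PROMPT_LEN:
        raise ValueError("prompt exceeds maximum allowed length")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(cleaned):
            raise ValueError("prompt matches a known injection pattern")
    return cleaned
```

A host LLM integration would call something like `sanitize_prompt(user_text)` and only forward the returned string to the OpenAI tools, surfacing the `ValueError` to the user instead of the model.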
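On the API-key finding: until token-based authentication is available, users can at least avoid hardcoding the key and avoid leaking it in logs. The sketch below assumes the conventional `OPENAI_API_KEY` environment variable; the helper names are illustrative, not part of any SkillShield or Composio API.

```python
import os

def load_openai_key() -> str:
    """Read the OpenAI API key from the environment rather than hardcoding it."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key

def redact(key: str) -> str:
    """Show only the first and last few characters when a key must appear in logs."""
    if len(key) <= 8:
        return "****"
    return f"{key[:4]}...{key[-4:]}"
```

Redacting before logging limits the blast radius if log storage, rather than the MCP server itself, is compromised.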