Security Audit
GroqCloud Automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
GroqCloud Automation received a trust score of 68/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 high and 1 medium severity (no critical or low). Key findings include local file read via the audio translation tool, Server-Side Request Forgery (SSRF) and data exfiltration via the audio translation tool, and potential prompt injection into GroqCloud chat completion.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Local file read via audio translation tool.** The `GROQCLOUD_GROQ_CREATE_AUDIO_TRANSLATION` tool explicitly allows the `file_path` parameter to be a local path, granting the agent the capability to read arbitrary files from the local filesystem. An attacker could instruct the agent to read sensitive files (e.g., configuration files, private keys, system files) and then exfiltrate their content by having them translated and returned, or used in subsequent prompts. *Mitigation:* restrict `file_path` to base64 data URLs only, or enforce strict allow-listing and sandboxing for local paths; if local file access is absolutely necessary, implement robust validation and access controls to prevent reads outside designated, secure directories. | LLM | SKILL.md:79 |
| HIGH | **Server-Side Request Forgery (SSRF) and data exfiltration via audio translation tool.** The `GROQCLOUD_GROQ_CREATE_AUDIO_TRANSLATION` tool explicitly allows the `file_path` parameter to be an HTTP(S) URL. This could be exploited for SSRF attacks, enabling the agent to make requests to internal network resources or to external attacker-controlled servers, leading to information disclosure, port scanning, or data exfiltration from the agent's environment. *Mitigation:* implement strict URL validation to block access to internal networks and restrict requests to trusted external domains; consider proxying all external requests through a controlled service that enforces security policies and prevents SSRF. | LLM | SKILL.md:79 |
| MEDIUM | **Potential for prompt injection into GroqCloud chat completion.** The `GROQCLOUD_GROQ_CREATE_CHAT_COMPLETION` tool accepts a `messages` array whose `content` field is typically populated by user input. If that content is not sanitized or validated before being passed to the GroqCloud model, an attacker could inject malicious instructions to manipulate the model's behavior, extract sensitive information from its context, or generate undesirable outputs. *Mitigation:* implement robust input sanitization and validation for all user-provided content before it reaches the `messages` parameter; consider a separate, sandboxed LLM call for untrusted input, or apply content filters to detect and mitigate malicious prompts. | LLM | SKILL.md:50 |
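The first HIGH finding's recommended allow-listing can be sketched as a path guard applied before any local `file_path` reaches the translation tool. This is a minimal illustration, not SkillShield's or GroqCloud's implementation; the sandbox directory name and function name are hypothetical.

```python
from pathlib import Path

# Hypothetical sandbox directory for audio inputs; any deployment would
# choose its own location.
ALLOWED_AUDIO_DIR = Path("/srv/skill/audio").resolve()

def validate_local_audio_path(file_path: str) -> Path:
    """Resolve the path and reject anything outside the sandbox directory."""
    resolved = Path(file_path).resolve()
    # Resolving first defeats `../` traversal; is_relative_to (Python 3.9+)
    # then checks containment against the allow-listed directory.
    if not resolved.is_relative_to(ALLOWED_AUDIO_DIR):
        raise PermissionError(f"Path outside sandbox: {file_path}")
    return resolved
```

A guard like this would run before the agent's `file_path` argument is forwarded, so a request for `/etc/passwd` or `/srv/skill/audio/../../etc/shadow` fails closed instead of being read and translated.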
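The SSRF finding's "strict URL validation" could look like the following sketch: reject non-HTTP(S) schemes, then resolve the hostname and refuse any address in private, loopback, link-local, or reserved ranges. The function name is illustrative, and a production guard would also need to handle redirects and DNS rebinding (e.g., by pinning the resolved address when making the request).

```python
import ipaddress
import socket
from urllib.parse import urlparse

def validate_audio_url(url: str) -> str:
    """Reject non-HTTP(S) URLs and URLs resolving to internal addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"Disallowed scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        raise ValueError("URL has no hostname")
    # A hostname can resolve to several addresses; every one must be public.
    for *_, sockaddr in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(sockaddr[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            raise ValueError(f"URL resolves to internal address: {addr}")
    return url
```

Routing all outbound fetches through a proxy that enforces this policy centrally, as the finding suggests, is more robust than per-tool checks because it also covers redirect targets.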
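For the MEDIUM finding, one common mitigation pattern is to keep untrusted text strictly in the `user` role and screen it before building the `messages` array. The patterns and limits below are illustrative assumptions, not a complete injection filter; phrase-matching alone is easy to evade and should be combined with the sandboxed-call or content-filter approaches the finding recommends.

```python
import re

# Illustrative deny-list of common injection phrasings; real filters would
# be broader and ideally model-based.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]
MAX_CONTENT_LEN = 8_000  # arbitrary cap for this sketch

def build_messages(system_prompt: str, user_content: str) -> list[dict]:
    """Screen untrusted content, then confine it to the user role."""
    if len(user_content) > MAX_CONTENT_LEN:
        raise ValueError("User content exceeds length limit")
    for pattern in SUSPECT_PATTERNS:
        if pattern.search(user_content):
            raise ValueError("Suspected prompt-injection phrase in user content")
    # Never concatenate untrusted text into the system prompt.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_content},
    ]
```

The resulting list is what would be passed as the `messages` parameter of `GROQCLOUD_GROQ_CREATE_CHAT_COMPLETION`, with the trusted system prompt and untrusted input kept in separate roles.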
Full report: https://skillshield.io/report/c32a0c5b1c81a0cf
Powered by SkillShield