Trust Assessment
The `image` skill received a trust score of 58/100, placing it in the **Caution** category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings: a missing required `name` field, and command injection via an unsanitized user prompt in Bash (affecting both the Gemini Native and Imagen 4 API calls).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 15, 2026 (commit 1823c3f6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via Unsanitized User Prompt in Bash.** The `PROMPT` variable, which is derived from user input, is directly interpolated into the JSON payload of `curl -d` commands without proper shell escaping. This allows an attacker to inject arbitrary shell commands by crafting a malicious prompt that breaks out of the JSON string and executes code within the shell context. This vulnerability exists for both the Gemini Native (Tier 1) and Imagen 4 (Tier 3) API calls. An attacker could use this to exfiltrate sensitive data (like API keys), execute arbitrary commands on the host system, or cause a denial of service. The `PROMPT` variable must be robustly escaped for both shell interpretation and JSON formatting before being embedded in the `curl` command. Consider using a dedicated JSON tool like `jq` to construct the payload, passing the prompt via a temporary file, or using a Python script to make the API call with proper string escaping. For example, to prevent shell injection, use `printf %q` or pass the prompt via stdin to `curl -d @-`. | LLM | SKILL.md:68 |
| CRITICAL | **Command Injection via Unsanitized User Prompt in Bash (Imagen 4).** As with the Gemini Native API call, the `PROMPT` variable is directly interpolated into the JSON payload for the Imagen 4 API call without proper shell escaping. This creates a critical command injection vulnerability, allowing an attacker to execute arbitrary shell commands by manipulating the prompt, which could lead to data exfiltration, arbitrary code execution, or denial of service. The same remediation applies: escape `PROMPT` for both shell interpretation and JSON formatting, e.g. construct the payload with `jq`, pass the prompt via a temporary file, or use `printf %q` / `curl -d @-`. | LLM | SKILL.md:169 |
| HIGH | **Potential Credential Exfiltration via Command Injection.** While the API keys (`GEMINI_API_KEY`, `POLLINATIONS_API_KEY`) are intended for legitimate API calls, the command injection vulnerabilities (SS-LLM-003) mean an attacker could redirect `curl` to a malicious server or execute commands to read and exfiltrate these sensitive environment variables. This risk is a direct consequence of the command injection flaws; addressing SS-LLM-003 prevents it. | LLM | SKILL.md:66 |
| MEDIUM | **Missing required field: name.** The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | plugins/specweave-media/skills/image/SKILL.md:1 |
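The medium-severity manifest finding is a one-line frontmatter fix. A minimal sketch, assuming the skill keeps its directory name (`image`) as its `name` value:

```yaml
---
name: image   # required for claude_code skills; value assumed from the skill's directory name
---
```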
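The recommended mitigation for the two critical findings can be sketched as follows. This is an illustrative POSIX-shell example assuming `jq` is available; the payload shape, variable names, and endpoint are assumptions, not the skill's actual API call:

```shell
#!/bin/sh
# Sketch: JSON-escape an untrusted prompt with jq instead of interpolating
# it into the curl command line. Payload shape and endpoint are illustrative.

PROMPT='a "quoted" prompt; $(echo injected)'   # attacker-controlled input

# jq --arg escapes quotes, backslashes, and control characters, so the value
# cannot break out of the JSON string or reach the shell parser.
PAYLOAD=$(jq -cn --arg p "$PROMPT" '{prompt: $p}')
printf '%s\n' "$PAYLOAD"

# The payload is then passed to curl on stdin (-d @-) rather than via argv:
#   printf '%s' "$PAYLOAD" | curl -s -H 'Content-Type: application/json' \
#     -d @- "$API_URL"
```

Passing the body on stdin keeps the untrusted string out of the command line entirely, so neither shell word-splitting nor JSON quoting can be abused.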
[Full report](https://skillshield.io/report/bbe24a5dbb1b27a7)