Trust Assessment
ollama-x-z-image-turbo received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 14 findings: 4 critical, 3 high, 6 medium, 0 low, and 1 informational. Key findings include network egress to untrusted endpoints, arbitrary command execution, and a missing required `name` field.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 3/100, making it the area most in need of remediation.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (14)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/eric51/ollama-x-z-image-turbo/SKILL.md:51 |
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/eric51/ollama-x-z-image-turbo/generate_image.py:10 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/eric51/ollama-x-z-image-turbo/runner.py:90 |
| CRITICAL | **Unsafe `exec` instruction with unvalidated user input.** `SKILL.md` instructs the host LLM to execute a shell command whose `--prompt "<PROMPT>"` argument directly embeds user-controlled input. Although the example quotes `<PROMPT>`, relying on the host LLM to correctly escape arbitrary input is a common source of command injection: a crafted prompt (e.g. `"; rm -rf /"`) could break out of the quotes and inject additional shell commands. *Remediation:* Use a safe execution mechanism that passes arguments as a list rather than a raw shell string, or apply robust shell escaping; if direct shell execution is unavoidable, strictly validate `<PROMPT>` to reject shell metacharacters. | LLM | SKILL.md:14 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `generate`; this can execute arbitrary code. *Remediation:* Avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/eric51/ollama-x-z-image-turbo/runner.py:90 |
| HIGH | **User-controlled prompt passed directly to the `ollama` command.** In `runner.py`, `generate` builds a `subprocess.run` argument list that includes the user-supplied `prompt`. The list form prevents shell injection, but `ollama` itself may be vulnerable to argument injection: a prompt such as `--model evil_model --delete-all-images` could be interpreted as command-line options and change the command's behavior. *Remediation:* Validate or sanitize the prompt before passing it to `ollama`, prefer a dedicated Ollama API over shelling out, or confirm the `run` subcommand is robust against such injection. | LLM | runner.py:100 |
| HIGH | **User-controlled prompt passed directly to the Ollama API.** The FastAPI endpoint in `generate_image.py` places the user's `prompt` directly into the JSON payload sent to Ollama's `/api/generate` endpoint. Specially crafted prompts could trigger unintended behavior, data leakage, or manipulation of the model's output beyond image generation. *Remediation:* Sanitize prompts or use a templating system that separates user input from system instructions; consider a guardrail LLM to filter or rephrase potentially malicious prompts before they reach Ollama. | LLM | generate_image.py:21 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/eric51/ollama-x-z-image-turbo/test_api_call.py:4 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the frontmatter. *Remediation:* Add a `name` field to the SKILL.md frontmatter. | Static | skills/eric51/ollama-x-z-image-turbo/SKILL.md:1 |
| MEDIUM | **Suspicious import: `requests`.** This module provides network access. *Remediation:* Verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/eric51/ollama-x-z-image-turbo/generate_image.py:3 |
| MEDIUM | **Suspicious import: `requests`.** This module provides network access. *Remediation:* Verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/eric51/ollama-x-z-image-turbo/test_api_call.py:1 |
| MEDIUM | **Suspicious import: `requests`.** This module provides network access. *Remediation:* Verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/eric51/ollama-x-z-image-turbo/whatsapp_integration.py:1 |
| MEDIUM | **Potential path traversal if `ollama` saves to arbitrary paths.** `runner.py` parses `ollama`'s stdout/stderr for "Image saved to: <path>". If `ollama` can be tricked (e.g. via prompt injection) into reporting a path outside the intended temporary directory (such as `../../../../etc/passwd`), `_maybe_saved_path` picks it up, the script resolves relative paths against `cwd` (`out.parent`), and `shutil.copyfile(saved_path, out)` copies a potentially sensitive file to the known `out` location, enabling exfiltration or overwrite. *Remediation:* Strictly validate that `saved_path` remains inside an allowed, sandboxed directory, e.g. by checking that its fully resolved path is contained in the allowed directory. | LLM | runner.py:140 |
| INFO | **User-controlled `image_url` embedded in WhatsApp message.** The endpoint in `whatsapp_integration.py` embeds the caller-supplied `image_url` directly in the outgoing message. The URL is currently sent only as text, but if the WhatsApp API or a recipient's client auto-fetches it, a malicious internal URL (e.g. `http://localhost:8080/admin`) could enable Server-Side Request Forgery (SSRF) or data exfiltration. *Remediation:* Validate that `image_url` points to a trusted domain or public image host; if the skill is meant to host images itself, generate and return the hosted URL rather than accepting an arbitrary one from the user. | LLM | whatsapp_integration.py:12 |
[Full report](https://skillshield.io/report/5ed376224fd33dd7)