Trust Assessment
image-utils received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 3 high, 1 medium, and 0 low severity. The high-severity findings are arbitrary file read via `ImageUtils.load`, arbitrary file write via `ImageUtils.save`, and server-side request forgery (SSRF) via `ImageUtils.load_from_url`; the medium-severity finding flags a suspicious import of `requests`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary File Read via ImageUtils.load.** The `ImageUtils.load` method directly uses a string `source` argument as a file path if it is not a URL or base64 string. If an attacker can control this `source` argument (e.g., through a malicious prompt to the AI agent), they can read arbitrary files on the system, leading to data exfiltration. Remediation: implement strict input validation for the `source` argument; only allow loading from a predefined set of safe directories, or explicitly validate that the path is within an allowed sandbox. Avoid passing untrusted strings directly to file system operations. | LLM | references/code-examples/image_utils.py:49 |
| HIGH | **Arbitrary File Write via ImageUtils.save.** The `ImageUtils.save` method directly uses a `path` argument to save an image. If an attacker can control this `path` argument (e.g., through a malicious prompt to the AI agent), they can write files to arbitrary locations on the file system, potentially overwriting critical system files or writing malicious content. This could lead to command injection (e.g., RCE if a web shell is written) or data exfiltration (e.g., overwriting logs or configuration files). Remediation: implement strict input validation and sanitization for the `path` argument; restrict file saving to a predefined, sandboxed output directory, and do not allow arbitrary file paths from untrusted input. | LLM | references/code-examples/image_utils.py:100 |
| HIGH | **Server-Side Request Forgery (SSRF) via ImageUtils.load_from_url.** The `ImageUtils.load_from_url` method performs an HTTP GET request to a `url` argument using `requests.get`. If an attacker can control this `url` argument (e.g., through a malicious prompt to the AI agent), they can force the agent to make requests to internal network resources (e.g., `http://localhost:8080/admin`, or cloud metadata endpoints like `http://169.254.169.254/latest/meta-data/`) or other external services. This could lead to information disclosure, interaction with internal services, or denial of service by downloading large files. Remediation: implement strict URL validation for the `url` argument; whitelist allowed domains, or use a robust URL parser to prevent access to internal IP ranges (127.0.0.1, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) and sensitive endpoints such as 169.254.169.254. | LLM | references/code-examples/image_utils.py:68 |
| MEDIUM | **Suspicious import: requests.** Import of `requests` detected. This module provides network or low-level system access. Verify this import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/galbria/image-utils/references/code-examples/image_utils.py:26 |
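The sandboxing recommended for the file read and write findings can be sketched as follows. This is a minimal illustration, not code from image-utils: `ALLOWED_DIR` and `safe_resolve` are hypothetical names, and the directory path is an assumption.

```python
from pathlib import Path

# Hypothetical allow-listed directory; the name and location are illustrative.
ALLOWED_DIR = Path("/srv/images").resolve()

def safe_resolve(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside ALLOWED_DIR.

    Path.resolve() collapses ".." segments and follows symlinks, so a
    traversal attempt like "../../etc/passwd" (or an absolute path such
    as "/etc/passwd") is caught by the containment check below.
    """
    candidate = (ALLOWED_DIR / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_DIR):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```

A skill would call `safe_resolve` before any `open()` or image save, so untrusted strings never reach the file system directly.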
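For the SSRF finding, one way to sketch the recommended URL validation is to resolve the hostname and refuse private, loopback, link-local, and reserved addresses. `is_safe_url` is a hypothetical helper, not part of image-utils, and a strict allow-list of domains (as the finding suggests) is stronger than this deny-list, since DNS rebinding can defeat a single pre-request check.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that point at internal or otherwise sensitive addresses.

    Resolves the hostname and checks every returned address, because a
    public-looking name can resolve to something like 169.254.169.254.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

A patched `load_from_url` would call this check (or an allow-list equivalent) before `requests.get`, and should also set a timeout and cap the response size to address the denial-of-service angle.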
Powered by SkillShield