Trust Assessment
seedream-imagegen received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings: a suspicious `urllib.request` import, arbitrary local file read with upload to an external API, and arbitrary file write to the filesystem.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary local file read and upload to external API.** The `scripts/generate_image.py` script reads arbitrary local files specified via the `--images` command-line argument, base64-encodes them, and sends them in the request payload to the external Volcengine Ark API. An attacker could instruct the LLM to pass paths to sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) to `--images`, exfiltrating them to the third-party API. *Remediation:* implement strict validation for image paths and only allow files from a designated, sandboxed directory (e.g., `/mnt/data/`); alternatively, require all reference images to be provided as URLs, or implement a secure upload mechanism that validates file types and content. | LLM | `scripts/generate_image.py:99` |
| MEDIUM | **Suspicious import: `urllib.request`.** Import of `urllib.request` detected. This module provides network or low-level system access. Verify this import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | `skills/wilsonliu95/seedream-imagegen/scripts/generate_image.py:12` |
| MEDIUM | **Arbitrary file write to filesystem.** The script accepts an arbitrary output directory via the `--output` argument, creates it if it does not exist, and downloads generated images into it. This lets an attacker write files to potentially sensitive locations on the agent's filesystem, which could lead to denial of service or, combined with other vulnerabilities, local privilege escalation. *Remediation:* restrict `--output` to a designated, sandboxed location (e.g., `/mnt/data/outputs/`) that is managed by the agent and has appropriate permissions; do not accept arbitrary paths from user input. | LLM | `scripts/generate_image.py:130` |
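Both path-related findings share one root cause: user-supplied paths reach file I/O without a confinement check. A minimal sketch of the sandboxing the remediation suggests, assuming Python 3.9+ for `Path.is_relative_to`; the helper name `resolve_inside` and the base directory are illustrative, not part of the skill's code:

```python
from pathlib import Path

def resolve_inside(base_dir: str, user_path: str) -> Path:
    """Resolve user_path and reject it unless it stays inside base_dir.

    Joining then resolving collapses `..` segments and symlink tricks, and
    pathlib discards base_dir entirely if user_path is absolute, so both
    escape routes end up caught by the containment check below.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```

The same check could gate both the `--images` inputs and the `--output` directory before any read, write, or `mkdir` happens.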