Trust Assessment
gemini-image-proxy received a trust score of 72/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. The key finding is Unrestricted File System Access via User Input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unrestricted File System Access via User Input.** The script passes user-provided command-line arguments (`input_image_path` and `output_path`) directly to file operations without validation or sandboxing. An attacker could read arbitrary files (e.g., `/etc/passwd`, `/app/secrets.txt`) and exfiltrate their contents via the image editing API, or write/overwrite arbitrary files (e.g., `/etc/cron.d/malicious_job`, `/root/.ssh/authorized_keys`), leading to command execution or system compromise. This is a significant risk if the skill runs with broad file system permissions. *Remediation:* validate that both paths resolve inside an allowed, sandboxed directory; reject path traversal sequences (e.g., `../`); prefer a file picker UI over raw string arguments where possible; and run the skill with least privilege, restricting file system access to only the directories it needs. | LLM | scripts/generate.py:40 |
| HIGH | **Unrestricted File System Access via User Input.** Same issue as above, at a second call site: a user-controlled path reaches a file operation without validation or sandboxing. The same remediation applies. | LLM | scripts/generate.py:64 |
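The remediation described above can be sketched in Python. This is a minimal illustration, not the skill's actual code: the sandbox root `/app/workspace` and the helper name `resolve_safe_path` are hypothetical.

```python
from pathlib import Path

# Hypothetical sandbox root; in practice this would be the skill's
# designated working directory.
ALLOWED_DIR = Path("/app/workspace").resolve()

def resolve_safe_path(user_path: str, base: Path = ALLOWED_DIR) -> Path:
    """Resolve a user-supplied path and ensure it stays inside `base`.

    Raises ValueError if the resolved path escapes the sandbox, which
    blocks traversal sequences like `../` as well as absolute paths
    such as `/etc/passwd` (joining an absolute path onto `base`
    discards `base` entirely, so the containment check catches it).
    """
    candidate = (base / user_path).resolve()
    # Path.is_relative_to requires Python 3.9+.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

Applied to this skill, both `input_image_path` and `output_path` would be passed through such a check before any read or write occurs.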
[Full SkillShield report](https://skillshield.io/report/50a25e63600cdf98)
Powered by SkillShield