Trust Assessment
nano-banana-pro received a trust score of 73/100, placing it in the Caution category. This skill carries security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. The key findings are "Potential Command Injection via Unsanitized Filename/Path Arguments" and "Potential Command Injection in Preflight File Existence Check".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via Unsanitized Filename/Path Arguments.** The skill instructs the host LLM to construct shell commands (`uv run ...`) whose `--filename` and `--input-image` arguments are populated with values that can originate from user input. If the host LLM does not sanitize or escape these values before embedding them in the command string, a malicious user could inject arbitrary shell commands; for example, a filename like `"; rm -rf /; #.png` could lead to arbitrary code execution. The host LLM must escape every user-provided string used as a shell-command argument so that shell metacharacters are never interpreted. | LLM | SKILL.md:16 |
| HIGH | **Potential Command Injection in Preflight File Existence Check.** The skill suggests a preflight check using a shell command: `test -f "path/to/input.png"`. If the path is derived from untrusted user input and is embedded in this command unescaped, a malicious user could provide a path like `foo"; rm -rf /; #` to execute arbitrary commands. Escape any user-provided paths used in shell commands, or prefer a file-existence check that does not interpolate untrusted input into a shell string at all. | LLM | SKILL.md:60 |
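The first finding can be mitigated by never letting untrusted values reach a shell parser. A minimal Python sketch follows; the script name `generate.py` is a placeholder assumption, not part of the skill:

```python
import shlex
import subprocess


def run_skill(filename: str, input_image: str) -> subprocess.CompletedProcess:
    """Invoke the skill with user-supplied paths passed safely.

    Passing arguments as a list (and never using shell=True) hands each
    value to the program verbatim, so shell metacharacters in a filename
    like '"; rm -rf /; #.png' are never interpreted by a shell.
    """
    cmd = [
        "uv", "run", "generate.py",  # placeholder script name
        "--filename", filename,
        "--input-image", input_image,
    ]
    return subprocess.run(cmd, capture_output=True, text=True)


def build_command(filename: str, input_image: str) -> str:
    """If a single command *string* must be emitted (e.g. by the host LLM),
    quote each untrusted value with shlex.quote before interpolation."""
    return (
        "uv run generate.py "
        f"--filename {shlex.quote(filename)} "
        f"--input-image {shlex.quote(input_image)}"
    )
```

With `shlex.quote`, the hostile filename from the finding round-trips as a single literal argument rather than a command separator.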
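For the second finding, the preflight check needs no shell at all. A minimal sketch of a shell-free alternative to `test -f`:

```python
from pathlib import Path


def preflight_check(path_str: str) -> bool:
    """Check that the input file exists without shell interpolation.

    Unlike `test -f "$path"`, no shell ever parses the value, so a
    hostile path such as 'foo"; rm -rf /; #' is treated purely as a
    (nonexistent) filename and nothing is executed.
    """
    return Path(path_str).is_file()
```

The same check is available as `os.path.isfile`; either form removes the injection surface entirely.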
[View the full report](https://skillshield.io/report/a3379667f4770255)
Powered by SkillShield