Trust Assessment
gemini received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via unsanitized user input in `gemini` CLI prompt, Skill instructs use of `--approval-mode yolo` for all automated tasks, granting excessive permissions, Potential Prompt Injection into the `gemini` LLM via user-controlled prompts.
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. The llm_behavioral_safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 11, 2026 (commit 0676c56a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Command Injection via unsanitized user input in `gemini` CLI prompt The skill instructs the agent to construct and execute `gemini` CLI commands where the prompt string is passed as an argument (e.g., `gemini ... "user prompt"` or `gemini -i "user prompt"`). If the user-provided prompt contains shell metacharacters (e.g., `"; rm -rf /"`), and this input is not properly sanitized or escaped before being passed to the shell, it could lead to arbitrary command execution. The skill does not provide any explicit guidance or mechanism for sanitizing user input before constructing these commands. Implement robust input sanitization and shell escaping for any user-provided strings that are incorporated into shell commands. Consider using a library function that safely escapes arguments for shell execution (e.g., `shlex.quote` in Python) or passing the prompt via a file to avoid direct shell interpretation of user input. | Unknown | SKILL.md:58 | |
| HIGH | Skill instructs use of `--approval-mode yolo` for all automated tasks, granting excessive permissions The skill explicitly instructs the agent to use `--approval-mode yolo` for all background and automated tasks, stating it is "REQUIRED". This flag causes the `gemini` CLI to auto-approve all its internal tools and actions without human intervention. While convenient for automation, this significantly increases the risk of unintended or malicious actions. If the `gemini` CLI is compromised, or if it's prompted to perform harmful actions (e.g., via prompt injection), the `yolo` mode ensures these actions are executed without any safeguard. This grants the `gemini` tool, and indirectly the user controlling its prompt, excessive permissions in an automated context. Although the 'Error Handling' section mentions asking for permission before using high-impact flags, the primary instruction for automated tasks is to 'ALWAYS use --approval-mode yolo', which creates a high-risk default. Re-evaluate the necessity of `--approval-mode yolo` for all automated tasks. If possible, use `--approval-mode auto_edit` or `default` (if an interactive approval mechanism can be integrated) to limit the scope of auto-approved actions. If `yolo` is truly necessary, implement strict sandboxing for the `gemini` CLI when run in `yolo` mode. Additionally, ensure explicit user confirmation via `AskUserQuestion` is *always* required before executing commands with this flag, especially when the prompt is user-controlled, rather than relying on a conditional 'unless already granted' clause. | Unknown | SKILL.md:30 | |
| HIGH | Potential Prompt Injection into the `gemini` LLM via user-controlled prompts The skill instructs the agent to pass user-provided prompts directly to the `gemini` CLI, which is an LLM-based tool. If the `gemini` LLM is susceptible to prompt injection, a malicious user could craft a prompt to manipulate the `gemini` tool's behavior, potentially leading to unintended actions, data disclosure, or even command execution (especially when combined with `--approval-mode yolo`). The skill does not provide any guidance on sanitizing or validating user prompts to prevent prompt injection into the underlying LLM. Implement prompt sanitization and validation techniques before passing user input to the `gemini` LLM. Consider using a 'system prompt' or 'guardrails' approach within the agent's interaction with `gemini` to limit its ability to be manipulated. Educate users about the risks of prompt injection. | Unknown | SKILL.md:58 |
Scan History
Embed Code
[](https://skillshield.io/report/b9c9c2b8528e52cf)
Powered by SkillShield