Trust Assessment
gemini-deep-research received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include a critical prompt injection via the `output_format` parameter, a high-severity arbitrary file write via the `output_dir` argument, and a suspicious import of `requests`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 5acc5677). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via `output_format` parameter.** The `output_format` parameter, which the LLM can control via the `--format` argument, is interpolated directly into the prompt sent to the Gemini API. An attacker can thereby inject arbitrary instructions into the Gemini model, potentially leading to unintended behavior, data manipulation, or generation of harmful content. *Remediation:* strictly validate `output_format` so it contains only allowed formatting instructions and no executable commands or injection attempts; ideally, pass formatting instructions to the API in a structured way, if available, rather than by string interpolation into the main query. | LLM | scripts/deep_research.py:35 |
| HIGH | **Arbitrary File Write via `output_dir` argument.** The `output_dir` argument, which defaults to the current directory but can be set by the LLM, determines where the research report and metadata files are saved. An attacker could supply an arbitrary path, potentially overwriting critical system files, writing to sensitive directories, or filling up disk space. *Remediation:* restrict `output_dir` to a safe, sandboxed directory (e.g., a temporary directory or a subdirectory of the skill's own workspace); if user-specified output directories are necessary, validate that they resolve within the allowed scope and contain no path traversal sequences (e.g., `../`). | LLM | scripts/deep_research.py:130 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network access; network and system modules in skill code may indicate data exfiltration. Verify that this import is necessary. | Static | skills/arun-8687/gemini-deep-research/scripts/deep_research.py:14 |
| MEDIUM | **Potential Data Exfiltration/Excessive Permissions via `file_search_store`.** The `file_search_store` parameter, which the LLM can control, is passed directly to the Gemini API as a file search store name. If the API resolves that name in a way that permits referencing unauthorized or sensitive data stores (e.g., by name or a path-like structure), this could lead to unauthorized data access or exfiltration. *Remediation:* allow only pre-configured, authorized `file_search_store_names`; if stores can be created or referenced dynamically, validate names strictly to prevent access to unintended data sources, and confirm in the Gemini API documentation how `file_search_store_names` are resolved and what security implications exist. | LLM | scripts/deep_research.py:40 |
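The critical finding and the second medium finding share the same basic mitigation: never pass an LLM-controlled string into a prompt or API parameter without first checking it against a fixed allowlist. A minimal sketch of that pattern follows; the permitted values and function names are illustrative assumptions, not taken from the skill's actual code.

```python
# Illustrative sketch: allowlist LLM-controlled string parameters before use.
# ALLOWED_FORMATS and AUTHORIZED_STORES are assumed values, not the skill's.
ALLOWED_FORMATS = {"markdown", "plain", "json"}
AUTHORIZED_STORES = {"research-notes-store"}

def validate_output_format(output_format: str) -> str:
    """Accept only a known formatting keyword, so free text (including
    injected instructions) never reaches the prompt string."""
    normalized = output_format.strip().lower()
    if normalized not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported output format: {output_format!r}")
    return normalized

def validate_store_name(store_name: str) -> str:
    """Accept only pre-configured, authorized file search store names."""
    if store_name not in AUTHORIZED_STORES:
        raise PermissionError(f"store not authorized: {store_name!r}")
    return store_name
```

Rejecting anything outside the allowlist, rather than trying to strip suspicious substrings, avoids the usual bypasses that plague denylist-based prompt sanitizers.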
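For the high-severity file-write finding, a common pattern is to resolve the requested directory against a fixed sandbox root and reject any result that escapes it. A sketch under the assumption of a hypothetical `SANDBOX_ROOT` (not part of the skill):

```python
import os

# Illustrative sketch: confine an LLM-supplied output_dir to a sandbox root.
# SANDBOX_ROOT is a hypothetical location, not taken from the skill.
SANDBOX_ROOT = os.path.realpath("research_output")

def resolve_output_dir(output_dir: str) -> str:
    """Resolve output_dir under SANDBOX_ROOT, rejecting absolute paths and
    traversal sequences such as '../' that would leave the sandbox."""
    candidate = os.path.realpath(os.path.join(SANDBOX_ROOT, output_dir))
    # commonpath falls back to an ancestor directory if candidate escapes
    if os.path.commonpath([SANDBOX_ROOT, candidate]) != SANDBOX_ROOT:
        raise ValueError(f"output_dir escapes sandbox: {output_dir!r}")
    return candidate
```

Comparing canonicalized paths with `os.path.commonpath` (rather than a string prefix check) also catches tricks like a sibling directory named `research_output_evil`.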
Full report: https://skillshield.io/report/00aa16a9ed1af547