Trust Assessment
google-search received a trust score of 64/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Dangerous tool allowed: exec, Suspicious import: urllib.request, Prompt Injection via User Query to Downstream LLM.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Dangerous tool allowed: exec The skill allows the 'exec' tool without constraints. This grants arbitrary command execution. Remove unconstrained shell/exec tools from allowed-tools, or add specific command constraints. | Static | skills/phucanh08/google-search-grounding-3/SKILL.md:1 | |
| HIGH | Prompt Injection via User Query to Downstream LLM The skill directly incorporates user-provided 'query' into the prompt sent to the Gemini LLM (via `client.models.generate_content`). A malicious user could craft a query containing instructions that manipulate the Gemini model's behavior, leading to unintended responses, generation of harmful content, or potential disclosure of internal LLM prompts/configurations. While this skill's primary function is to interact with an LLM, the direct embedding of untrusted user input into the LLM's prompt without robust instruction-following safeguards creates a prompt injection vulnerability against the target Gemini model. Implement robust prompt engineering techniques to clearly delineate user input from system instructions. Consider using specific delimiters or a dedicated 'user_message' field if the Gemini API supports it, to ensure user input is interpreted solely as content and not as instructions. This helps prevent the user's query from overriding or manipulating the intended behavior of the Gemini model. | LLM | lib/google_search.py:114 | |
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/phucanh08/google-search-grounding-3/lib/google_search.py:24 |
Scan History
Embed Code
[](https://skillshield.io/report/766c551076c9a972)
Powered by SkillShield