Trust Assessment
google-maps-api-skill received a trust score of 51/100, placing it in the Caution category. Users should review the security findings below before deploying this skill.
SkillShield's automated analysis identified 6 findings: 2 critical, 1 high, 3 medium, and 0 low severity. Key findings include "Missing required field: name," "Suspicious import: requests," and "Agent instructed to output specific text and modify behavior."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, driven by the prompt-injection, command-injection, and credential-handling findings below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Agent instructed to output specific text and modify behavior.** The skill's `SKILL.md` contains explicit instructions for the host LLM (agent) to output specific text and modify its operational behavior; for example, the 'Agent must inform the user' and 'Agent Note' sections dictate the LLM's responses and actions. This is a direct attempt to manipulate the LLM's responses and internal logic, which constitutes prompt injection. **Remediation:** Remove direct instructions to the agent from the skill description. Instead, define tool outputs and let the LLM decide how to present information or handle errors based on its own system prompt and context. | LLM | SKILL.md:26 |
| CRITICAL | **User-controlled input directly used in shell command execution.** The `SKILL.md` provides an example command for the agent to execute: `python -u ./scripts/google_maps_api.py "keywords" "language" "country"`. The parameters `keywords`, `language`, and `country` are user-controlled. If the agent substitutes user input directly into this command string without proper shell escaping or quoting, a malicious user could inject arbitrary shell commands (e.g., `"; rm -rf /"`), leading to command injection. **Remediation:** When executing shell commands with user-controlled arguments, ensure all arguments are properly escaped or quoted for the target shell. Ideally, use an execution mechanism that separates the command from its arguments, preventing shell interpretation (e.g., `subprocess.run(['python', script, arg1, arg2])` in Python, or the equivalent in other environments). | LLM | SKILL.md:43 |
| HIGH | **Instruction to solicit API key directly in chat.** The skill explicitly instructs the agent to ask the user to provide their `BROWSERACT_API_KEY` directly 'in this chat' if it's not configured. This practice is insecure: it exposes sensitive credentials in the conversation history, where they can be logged, reviewed, or potentially exfiltrated. **Remediation:** Never ask users to provide API keys or other sensitive credentials directly in chat. Instead, instruct them to set environment variables, use a secure secrets management system, or provide the key through a dedicated, secure input mechanism. | LLM | SKILL.md:28 |
| MEDIUM | **Missing required field: name.** The 'name' field is required for claude_code skills but is missing from the frontmatter. **Remediation:** Add a 'name' field to the SKILL.md frontmatter. | Static | skills/phheng/google-maps-api-skill/SKILL.md:1 |
| MEDIUM | **Suspicious import: requests.** Import of 'requests' detected. This module provides network access; network and system modules in skill code may indicate data exfiltration. **Remediation:** Verify this import is necessary. | Static | skills/phheng/google-maps-api-skill/scripts/google_maps_api.py:3 |
| MEDIUM | **Script may dump raw API response containing unparsed data.** In `scripts/google_maps_api.py`, if the `result_string` (parsed output) is empty, the script falls back to dumping the entire `task_info` JSON object using `json.dumps(task_info, ensure_ascii=False)`. This object could contain more data than the explicitly listed 'Output Data' fields, including sensitive or internal API details not intended for the end user or the LLM's context. **Remediation:** Return only explicitly intended and sanitized data to the user or the LLM. If `result_string` is empty, return a structured error message or a predefined subset of `task_info` rather than the entire raw response. | LLM | scripts/google_maps_api.py:80 |
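The remediation for the command-injection finding can be sketched as follows. This is an illustrative example, not the skill's actual code: it shows how passing user input as discrete `argv` entries via `subprocess.run` keeps shell metacharacters inert, in contrast to interpolating them into a shell command string.

```python
import subprocess
import sys

def run_script_safely(script_path: str, *args: str) -> str:
    """Run a Python script with user-supplied arguments passed as a list.

    Each value reaches the child process as a single argv entry, so the
    shell never interprets metacharacters such as ';' or '&&'.
    """
    result = subprocess.run(
        [sys.executable, "-u", script_path, *args],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Even a hostile "keyword" like '"; rm -rf /"' is delivered verbatim to the
# script as one argument, e.g.:
# run_script_safely("./scripts/google_maps_api.py", '"; rm -rf /"', "en", "us")
```

Using `shell=False` (the default for a list argument) is what prevents the injection; building a single string and running it with `shell=True` would reintroduce the vulnerability.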
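The raw-response finding's remediation amounts to an allow-list filter. A minimal sketch, assuming hypothetical field names (`title`, `address`, `rating`) stand in for the skill's documented 'Output Data' fields, which this report does not enumerate:

```python
import json

# Hypothetical allow-list; the real names would come from the skill's
# documented 'Output Data' fields.
ALLOWED_FIELDS = {"title", "address", "rating"}

def safe_result(task_info: dict) -> str:
    """Return only explicitly allowed fields from an API response.

    Instead of dumping the whole raw object into the LLM context, keep a
    known subset and fall back to a structured error when nothing matches.
    """
    filtered = {k: v for k, v in task_info.items() if k in ALLOWED_FIELDS}
    if not filtered:
        return json.dumps({"error": "no parsable result fields returned"})
    return json.dumps(filtered, ensure_ascii=False)
```

With this shape, internal keys (auth tokens, request metadata) never reach the user or the model even when parsing fails.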