Trust Assessment
google-maps received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include "Dangerous tool allowed: exec", "Skill declares 'exec' permission", and "Suspicious import: requests".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100 and is the main area needing improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **User-controlled input passed to the `exec` tool via command-line arguments.** The skill's `SKILL.md` shows examples of calling `python3 lib/map_helper.py` with user-controlled strings (e.g., 'origin', 'destination', and option values) directly as command-line arguments. Given the declared `exec` permission, if the host LLM constructs the command string by concatenating user input without shell escaping, a malicious user can inject shell metacharacters (`;`, `\|`, `&&`, `$()`) to execute arbitrary commands on the host system. This is a direct command injection vulnerability at the interface between the LLM and the `exec` tool. Remediation: strictly sanitize and shell-escape every user-provided argument before building the command string, e.g., with `shlex.quote()`, or pass arguments as a list to `subprocess.run` so no intermediate shell is involved. Alternatively, expose a more constrained tool interface that does not rely on raw shell execution for user input. | LLM | SKILL.md:39 |
| HIGH | **Dangerous tool allowed: `exec`.** The skill allows the `exec` tool without constraints, which grants arbitrary command execution. Remove unconstrained shell/exec tools from `allowed-tools`, or add specific command constraints. | Static | skills/shaharsha/google-maps/SKILL.md:1 |
| HIGH | **Skill declares the `exec` permission.** The manifest explicitly declares the `exec` permission, allowing the host LLM to execute arbitrary shell commands. While necessary for some skills, this significantly increases the command-injection attack surface unless the LLM and the skill's argument parsing handle input with extreme care. Re-evaluate whether `exec` is strictly necessary; if so, ensure all user-provided arguments are rigorously sanitized and escaped before any command string is constructed, or use a more constrained execution environment that does not rely on raw shell execution. | LLM | SKILL.md |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network access; network and system modules in skill code may indicate data exfiltration. Verify this import is necessary. | Static | skills/shaharsha/google-maps/lib/map_helper.py:28 |
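The "add specific command constraints" remediation from the HIGH finding can be sketched as a manifest fragment. The exact `allowed-tools` constraint syntax depends on the host runtime, so treat the pattern below as an assumption to verify against your host's schema rather than a definitive fix:

```yaml
# SKILL.md frontmatter (sketch; constraint syntax varies by host)
---
name: google-maps
# Instead of an unconstrained exec/shell entry, pin the tool to the
# single helper invocation the skill actually needs:
allowed-tools: Bash(python3 lib/map_helper.py:*)
---
```

Constraining the tool this way narrows arbitrary command execution to one script, though argument sanitization (as in the critical finding) is still required.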
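The critical finding's remediation can be sketched in Python. The flag names (`--origin`, `--destination`) are illustrative assumptions; the helper script's real CLI is not shown in this report. The sketch shows both safe patterns: quoting each untrusted argument with `shlex.quote()` when a shell command string is unavoidable, and the preferred argument-list form of `subprocess.run`, which bypasses the shell entirely.

```python
import shlex
import subprocess

def build_command(origin: str, destination: str) -> str:
    """Build a shell-safe command string for the helper script.

    Use only when a single command string must be handed to a shell.
    shlex.quote() wraps each untrusted value so metacharacters
    (;, |, &&, $()) arrive as literal text, not shell syntax.
    """
    parts = ["python3", "lib/map_helper.py",
             "--origin", origin, "--destination", destination]
    return " ".join(shlex.quote(p) for p in parts)

def run_helper(origin: str, destination: str) -> subprocess.CompletedProcess:
    """Preferred form: pass argv as a list with shell=False (the default).

    No shell ever parses the arguments, so injection via metacharacters
    is structurally impossible.
    """
    return subprocess.run(
        ["python3", "lib/map_helper.py",
         "--origin", origin, "--destination", destination],
        capture_output=True, text=True)
```

With the list form, an input like `"Haifa; rm -rf /"` is delivered to the script as one literal argument rather than being interpreted as a second command.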
[View the full report](https://skillshield.io/report/5ccf67471a54a9d3)
Powered by SkillShield