Trust Assessment
gmail-client-PM received a trust score of 81/100, placing it in the Mostly Trusted category. The skill passed most security checks, with only minor issues noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 1 medium, and 1 low severity. Key findings include Potential Command Injection via User Input, Inherent Data Exfiltration Capability, and Sensitive Credentials Required.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User Input.** The skill's usage examples in `SKILL.md` show user-controlled arguments (`<EMAIL_ID>`, `<TO>`, `<SUBJECT>`, `<BODY>`) being passed directly to a Python script (`gmail_tool.py`) via shell commands. If the underlying script does not properly sanitize these inputs before use (e.g., when constructing internal shell commands or database queries), command injection could allow an attacker to execute arbitrary shell commands or manipulate the script's behavior. *Remediation:* review `skills/gmail-client/scripts/gmail_tool.py` to ensure all user-provided arguments are sanitized and escaped before use in any shell command or subprocess call; when executing external commands, prefer `subprocess.run` with `shell=False`, passing arguments as a list, to prevent shell injection. | LLM | SKILL.md:28 |
| MEDIUM | **Inherent Data Exfiltration Capability.** The skill's primary function is to send emails, including user-provided subject and body content, to specified recipients. This inherently allows data exfiltration if the host LLM is prompted to send sensitive information (e.g., conversation history, internal data, or files) to an unauthorized external address. While not a vulnerability in the documentation itself, it is a significant risk if the LLM's guardrails or user-interaction mechanisms are insufficient. *Remediation:* implement robust guardrails in the host LLM to prevent the skill from sending sensitive or unauthorized data; monitor outbound email for unusual patterns or recipients; consider restricting the `send` functionality to trusted domains or requiring explicit user confirmation for external emails, especially when sensitive data may be involved. | LLM | SKILL.md:28 |
| LOW | **Sensitive Credentials Required.** The skill requires `GMAIL_USER` and `GMAIL_PASS` (an App Password) to be set as environment variables for authentication. These credentials make the skill a potential target for credential harvesting if the underlying script is compromised or the environment variables are not securely managed. While `SKILL.md` itself does not harvest credentials, it establishes their presence in the execution environment. *Remediation:* manage credential-bearing environment variables securely (e.g., with a secrets management system) and do not expose them unnecessarily; `gmail_tool.py` must handle the credentials with care, never logging or otherwise exposing them; apply least-privilege access to the skill's execution environment. | LLM | SKILL.md:9 |
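The safe-invocation pattern recommended in the HIGH finding can be sketched in a few lines. This is a minimal illustration, not code from `gmail_tool.py`: with `shell=False` (the `subprocess.run` default) and arguments passed as a list, each value reaches the child process as a single `argv` entry and is never parsed by a shell, so metacharacters like `;` or `$(...)` are inert. The `echo` command is used here only as a stand-in for the real script.

```python
import subprocess

def run_argv(args: list[str]) -> str:
    """Run a command with arguments as discrete argv entries.

    shell=False is the default for subprocess.run; user-supplied values
    in `args` are never interpreted by a shell, which blocks injection.
    """
    return subprocess.run(
        args, capture_output=True, text=True, check=True
    ).stdout

# A hostile value stays one literal argument; nothing after ';' executes.
payload = "Hello; rm -rf /"
output = run_argv(["echo", payload])
print(output)  # prints the payload verbatim, including the metacharacters
```

Contrast this with `subprocess.run(f"echo {payload}", shell=True)`, where the same string would be handed to a shell and the text after `;` would run as a second command.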
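For the LOW finding, careful credential loading is mostly about failing fast and never echoing the secret. A minimal sketch, assuming only the `GMAIL_USER`/`GMAIL_PASS` variable names stated in the report (the helper function itself is hypothetical):

```python
import os

def load_gmail_credentials() -> tuple[str, str]:
    """Read GMAIL_USER and GMAIL_PASS from the environment.

    Fails fast when either is missing, and names only the absent
    variable(s) in the error: the password value itself is never
    included in exceptions or logs.
    """
    user = os.environ.get("GMAIL_USER")
    password = os.environ.get("GMAIL_PASS")
    if not user or not password:
        missing = [name for name, value in
                   (("GMAIL_USER", user), ("GMAIL_PASS", password))
                   if not value]
        raise RuntimeError(
            "Missing required environment variables: " + ", ".join(missing)
        )
    return user, password
```

In production the variables would be injected by a secrets manager rather than a shell profile, and the skill's process would run with least-privilege access as the finding recommends.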
Scan History
Embed Code
[SkillShield report badge](https://skillshield.io/report/5d01a6dff4f32ade)
Powered by SkillShield