Trust Assessment
gmail-client-PM received a trust score of 89/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. Key findings include an IMAP protocol injection via `email_id` in `cmd_read`, and unrestricted email sending that allows data exfiltration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **IMAP Protocol Injection via email_id in cmd_read.** The `email_id` argument, which is user-controlled via command-line input, is directly passed to `imaplib.IMAP4_SSL.fetch()`. If a malicious `email_id` string containing IMAP commands or special characters is provided (e.g., `1) UID FETCH 1 (BODY[])`), it could lead to IMAP protocol injection. This could allow a compromised LLM to manipulate the IMAP session beyond fetching a single email by ID, potentially leading to information disclosure or unexpected server behavior. Validate the `email_id` input to ensure it is a valid integer or a safe, expected format for IMAP message IDs before passing it to `mail.fetch()`. For example, add a check like `if not email_id.isdigit(): raise ValueError("Invalid email ID")`. | LLM | scripts/gmail_tool.py:79 |
| MEDIUM | **Unrestricted Email Sending Allows Data Exfiltration.** The `cmd_send` function allows the LLM to specify an arbitrary recipient (`to`) and email body (`body`). A compromised or malicious LLM could leverage this functionality to exfiltrate sensitive information (e.g., user data, internal system details, or other skill outputs) to an attacker-controlled email address. While sending emails is the skill's primary purpose, the unrestricted nature of the `to` field poses a significant exfiltration risk. Implement policies at the agent/LLM orchestration layer to restrict the `to` addresses (e.g., to a whitelist of trusted domains or requiring explicit user confirmation for external addresses). Alternatively, the skill itself could be modified to enforce such restrictions if feasible and desired for the specific use case. | LLM | scripts/gmail_tool.py:108 |
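The input check recommended for the first finding can be sketched as a small validator applied before `mail.fetch()`. This is a sketch only; `validate_email_id` is a hypothetical helper name, not part of the skill's actual code:

```python
def validate_email_id(email_id: str) -> str:
    """Return email_id unchanged if it is a plain message sequence number.

    Anything containing spaces, parentheses, or IMAP keywords (e.g.
    "1) UID FETCH 1 (BODY[])") fails isdigit() and is rejected,
    blocking IMAP protocol injection before mail.fetch() is called.
    """
    if not email_id.isdigit():
        raise ValueError(f"Invalid email ID: {email_id!r}")
    return email_id
```

The call site would then become `mail.fetch(validate_email_id(email_id), "(RFC822)")`, so only plain numeric IDs ever reach the IMAP session.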
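The domain-whitelist mitigation suggested for the second finding could look like the following sketch, gating `cmd_send` on the recipient's domain. The `ALLOWED_DOMAINS` set and the `check_recipient` helper are hypothetical illustrations, not part of the skill:

```python
# Hypothetical policy: only these recipient domains are trusted.
ALLOWED_DOMAINS = {"example.com"}

def check_recipient(to: str) -> bool:
    """Return True only if `to` has an @ and its domain is whitelisted.

    Addresses outside the allowlist would instead require explicit
    user confirmation at the orchestration layer.
    """
    if "@" not in to:
        return False
    domain = to.rpartition("@")[2].lower()
    return domain in ALLOWED_DOMAINS
```

In practice such a check is often better enforced at the agent/orchestration layer, as the finding notes, so the policy can be updated without modifying the skill itself.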
[View the full report](https://skillshield.io/report/265366effdefa434)
Powered by SkillShield