Trust Assessment
ez-google received a trust score of 28/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 1 critical, 3 high, 2 medium, and 0 low severity. Key findings include persistence / self-modification instructions, unsafe deserialization / dynamic eval, and reliance on a third-party hosted OAuth service.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Persistence / self-modification instructions.** Shell RC file modification for persistence. Remediation: remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/araa47/ez-google/scripts/auth.py:43 |
| HIGH | **Reliance on Third-Party Hosted OAuth Service.** The skill's primary authentication flow (`auth.py login`) defaults to a hardcoded third-party hosted OAuth service at `https://ezagentauth.com`. This is a significant supply chain risk: a compromise of, or malicious change to, this external service could directly affect the security of Google Workspace accounts authenticated via this skill, potentially enabling credential harvesting or unauthorized access. Remediation: document the inherent risk of relying on a third-party service for critical authentication; advise users to use the "Advanced (bring your own OAuth app)" local OAuth flow by setting the `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET` environment variables, or to self-host the OAuth worker if they require full control over the authentication chain. | LLM | scripts/auth.py:29 |
| HIGH | **Excessive OAuth Scopes Requested.** The skill requests broad OAuth scopes for Google Workspace services, including full read/write/modify access to Calendar (`calendar`), Drive (`drive`), Docs (`documents`), Sheets (`spreadsheets`), Gmail (`gmail.modify`), and Presentations (`presentations`), plus message-sending for Chat (`chat.messages`). These permissions match the skill's stated functionality but grant significant control over a user's Google Workspace data; if the agent or skill is compromised, they could be exploited for extensive data manipulation, deletion, or exfiltration across multiple critical services. Remediation: review whether all requested scopes are strictly necessary for every function; consider letting users grant more granular permissions per use case (e.g., `gmail.send` instead of `gmail.modify` if only sending is required, or `drive.file` for specific file access); clearly communicate the extent of the permissions requested. | LLM | scripts/auth.py:36 |
| HIGH | **Direct Access to Read Sensitive User Data.** Multiple commands (e.g., `gmail.py get`, `drive.py download`, `docs.py get`, `slides.py text`, `people.py me`, `chat.py messages`, `sheets.py get`) read and write potentially sensitive Google Workspace data to `stdout`. This is the skill's intended functionality, but it means a compromised LLM agent could be instructed to read and exfiltrate large amounts of private data (emails, documents, drive files, contacts, chat messages, spreadsheet data) without further user interaction, given the broad permissions granted. Remediation: this risk is inherent to a data-retrieval skill, so mitigation lies primarily in the security of the agent itself and the policies governing its use; users should be made explicitly aware of the skill's data access capabilities and grant it only to trusted agents, with strict access controls and monitoring. | LLM | SKILL.md:30 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/araa47/ez-google/scripts/slides.py:6 |
| MEDIUM | **Sensitive OAuth Token Stored Locally.** Google OAuth refresh and access tokens are stored as plain JSON in `~/.simple-google-workspace/token.json`. These tokens grant persistent access to the user's Google Workspace services. Local storage of OAuth tokens is common, but if the agent's execution environment is compromised, or if the file permissions on `token.json` are inadequate, an attacker could exfiltrate the file and gain unauthorized, persistent access to the user's Google account. Remediation: advise users to restrict file system permissions (e.g., `chmod 600 token.json`) on the `~/.simple-google-workspace` directory and its contents, and emphasize that the security of the agent's execution environment is critical to protecting these credentials. | LLM | scripts/auth.py:50 |
Powered by SkillShield