Trust Assessment
google-workspace received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 3 critical, 1 high, 1 medium, and 0 low severity. Key findings include network egress to untrusted endpoints, an untrusted executable download (openclaw-agent), and execution of an untrusted installation script hosted on glot.io.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, reflecting the critical behavioral findings detailed below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints**<br>URL pointing to a known exfiltration/webhook service.<br>*Remediation:* Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | skills/hightower6eu/google-workspace-t9lkr/SKILL.md:10 |
| CRITICAL | **Untrusted Executable Download (openclaw-agent)**<br>The skill instructs users to download and run an executable (`openclaw-agent.zip`) from an external, unverified GitHub repository (hedefbari). Executables from untrusted sources pose a significant supply chain risk: they could contain malware or backdoors that compromise the user's system, and the agent's integrity is not guaranteed.<br>*Remediation:* Do not instruct users to download and run executables from unverified sources. If an agent is required, distribute it through a secure, verifiable channel (e.g., an official package manager or signed binaries) or provide its source code for review. | LLM | SKILL.md:9 |
| CRITICAL | **Untrusted Installation Script Execution (glot.io)**<br>The skill directs macOS users to `glot.io`, a pastebin-like service, to obtain and execute an installation script. Content on such platforms can be replaced with malicious code at any time, leading to command injection or system compromise; the source is untrusted and volatile.<br>*Remediation:* Never instruct users to execute scripts directly from pastebin-like services. Ship installation scripts inside the skill package or serve them from a securely hosted, version-controlled source. | LLM | SKILL.md:12 |
| HIGH | **Excessive Google Drive Permissions**<br>The `google_auth.py` helper script requests the `https://www.googleapis.com/auth/drive` scope, which grants full read, write, and delete access to every file in the user's Google Drive, in addition to broad access to Gmail, Calendar, Contacts, Sheets, and Docs. Granting full Drive access by default for all operations violates the principle of least privilege; a compromised skill or agent could exfiltrate or manipulate all of the user's Drive files.<br>*Remediation:* Request granular, per-operation scopes. If only file reads are needed, use `drive.readonly` or `drive.file` instead of full `drive` access, and let users choose which services to enable. | LLM | SKILL.md:38 |
| MEDIUM | **Local Storage of Broad OAuth Credentials**<br>The skill stores OAuth credentials in a `token.pickle` file in the working directory. This file holds access tokens that grant broad permissions to Google Workspace services (including full Drive access), and `credentials.json` contains sensitive client secrets. If the working directory or the user's system is compromised, an attacker could harvest these credentials and gain unauthorized access to the user's Google account.<br>*Remediation:* Store `credentials.json` and `token.pickle` in a secure location with restricted permissions, outside publicly accessible directories; prefer environment variables or a secure credential manager, and encrypt stored tokens where possible. | LLM | SKILL.md:45 |
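The two critical supply-chain findings (the `openclaw-agent.zip` download and the glot.io install script) share one mitigation: pin a cryptographic digest and verify it before running anything fetched from outside the package. A minimal Python sketch; the expected digest below is a placeholder (it is the SHA-256 of empty input), not a real value published for this artifact:

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

# Placeholder for illustration: this is the SHA-256 of empty input.
# A real skill would pin the digest published alongside the artifact.
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
```

Refusing to execute on a mismatch turns a silently swapped glot.io paste or repository release into a hard failure instead of a compromise.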
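For the excessive-scope finding, a least-privilege pattern is to map each operation to the narrowest published Google OAuth scope and request only the union needed for the current session. A sketch with hypothetical operation names; the scope URLs are Google's published OAuth scopes, and nothing here is taken from the skill's actual `google_auth.py`:

```python
# Hypothetical operation -> narrowest published Google OAuth scope.
OPERATION_SCOPES = {
    "drive_read": "https://www.googleapis.com/auth/drive.readonly",
    "drive_file": "https://www.googleapis.com/auth/drive.file",  # only files the app created/opened
    "gmail_read": "https://www.googleapis.com/auth/gmail.readonly",
    "calendar":   "https://www.googleapis.com/auth/calendar.events",
}

def scopes_for(operations):
    """Return the minimal, de-duplicated scope set for the requested operations."""
    unknown = set(operations) - OPERATION_SCOPES.keys()
    if unknown:
        raise ValueError(f"no scope mapping for: {sorted(unknown)}")
    return sorted({OPERATION_SCOPES[op] for op in operations})
```

The resulting list can then be passed as the scopes argument when building the OAuth flow (e.g. to `google_auth_oauthlib`'s `InstalledAppFlow.from_client_secrets_file`), so a session that only reads Drive never holds a token that can delete files.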
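For the token-storage finding, one mitigation is to write tokens as owner-only JSON under the user's config directory rather than pickling them into the working directory. A sketch, assuming a POSIX system (the `0o600` mode has no effect on Windows) and an illustrative config path:

```python
import json
import os
from pathlib import Path

def save_token(token: dict, path: Path) -> None:
    """Persist an OAuth token as JSON, created with owner-only (0600) permissions."""
    path.parent.mkdir(parents=True, exist_ok=True)
    # os.open sets restrictive permissions at creation time, avoiding a
    # window in which the file exists with the default (wider) umask mode.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(token, f)

# Suggested location outside the working directory (illustrative path):
TOKEN_PATH = Path.home() / ".config" / "google-workspace-skill" / "token.json"
```

JSON also avoids `pickle`'s deserialization risk: loading a tampered `token.pickle` can execute arbitrary code, while loading tampered JSON merely fails.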
[Full report](https://skillshield.io/report/7b1efa781a6bc6cc) · Powered by SkillShield