Trust Assessment
The `google` skill received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution via an untrusted pastebin link in the macOS setup, untrusted executable download and execution in the Windows setup, and broad file system and sharing access through Google Drive actions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 33/100, which accounts for all of the findings below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution via untrusted pastebin link for macOS setup.** The skill's macOS setup instructions direct users to `rentry.co/openclaw-core` to "copy the command and run it in terminal." `rentry.co` is a public pastebin service, not a secure software distribution platform; the content at this URL can be changed at any time by its owner, allowing arbitrary, potentially malicious commands to be executed on the user's system. This is a direct command-injection vulnerability and a severe supply-chain risk during installation. *Recommendation:* replace the `rentry.co` link with a secure, version-controlled distribution method (e.g., an official package manager, a signed installer, or a script hosted on a trusted, immutable source), and clearly document the expected commands and their purpose. | LLM | SKILL.md:9 |
| HIGH | **Untrusted executable download and execution for Windows setup.** The skill's Windows setup instructions direct users to download an executable (`openclawcore-1.0.3.zip`) from a personal GitHub repository (`github.com/denboss99/openclaw-core`). Downloading and running an executable from an unverified personal GitHub account, especially one that requires a password to extract, poses a significant supply-chain risk: the executable could contain malware or perform unintended actions, compromising the system during installation. *Recommendation:* host the `openclaw-core` executable on a trusted, official distribution channel and provide checksums for verification; ideally, integrate the core functionality directly into the skill or deliver it via a more secure, auditable dependency management system. | LLM | SKILL.md:8 |
| HIGH | **Broad file system and sharing access through Google Drive actions.** The skill exposes Google Drive actions (`upload`, `download`, `share`) that accept arbitrary file paths (`filePath`, `outputPath`) and email addresses (`email`). This grants the skill, and by extension any LLM or user interacting with it, broad capabilities to read from and write to the local filesystem and to share files with any email address. This poses a high risk of data exfiltration (uploading sensitive local files, sharing Drive files with unauthorized parties) and data integrity issues (downloading malicious files, overwriting local files). *Recommendation:* implement strict validation and sandboxing for `filePath` and `outputPath`, restricting them to specific approved directories or requiring explicit user confirmation for sensitive paths; for `share` actions, consider whitelisting domains or requiring user approval for external sharing; provide more granular permissions where possible, e.g., read-only access for certain operations. | LLM | SKILL.md:103 |
| MEDIUM | **Broad email sending and reading capabilities.** The skill provides Gmail actions (`send`, `list`, `get`, `search`, `reply`) that allow sending emails to arbitrary recipients with arbitrary content, and reading or searching emails. While this is intended functionality, the ability to send to any `to` address with any `body` presents a risk of data exfiltration or phishing if the LLM is manipulated, and the ability to read any email by `messageId` or search query poses a privacy risk. *Recommendation:* implement stricter controls or user confirmation for sending emails to external or new recipients; consider limiting the scope of email reading (e.g., only emails from specific senders or within a certain timeframe) or requiring explicit user consent for accessing email content. | LLM | SKILL.md:53 |
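The checksum recommendation in the Windows finding can be sketched in Python. This is a minimal illustration, not SkillShield or skill code; the function names and the idea of shipping a `.sha256` file alongside the archive are assumptions for the example:

```python
import hashlib
import sys


def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_download(path: str, expected_sha256: str) -> None:
    """Abort installation if the file does not match the published checksum."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        sys.exit(f"checksum mismatch for {path}: got {actual}")
```

A publisher would distribute the expected digest through a trusted channel (e.g., next to the release archive), and the installer would call `verify_download` before extracting or running anything.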
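The path-sandboxing recommendation for the Google Drive actions can be sketched similarly. A minimal example, assuming a single approved base directory (the directory name `/home/user/drive-sync` is hypothetical), that rejects traversal and absolute-path escapes before any `upload`/`download` touches the filesystem:

```python
from pathlib import Path

# Hypothetical approved directory for Drive uploads/downloads.
ALLOWED_BASE = Path("/home/user/drive-sync")


def resolve_safe_path(user_path: str, base: Path = ALLOWED_BASE) -> Path:
    """Resolve a user-supplied path and reject anything outside `base`.

    Joining an absolute `user_path` onto `base` with pathlib discards `base`,
    and `..` segments can climb out of it; resolving first and then checking
    containment catches both cases. Requires Python 3.9+ for is_relative_to.
    """
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base.resolve()):
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

With this guard, `resolve_safe_path("docs/report.pdf")` succeeds, while `resolve_safe_path("../../etc/passwd")` or an absolute path like `/etc/passwd` raises `ValueError` instead of reaching the Drive action.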
[Full report](https://skillshield.io/report/ae2731449d695d85)
Powered by SkillShield