Trust Assessment
whatdo received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include "Potential Command Injection via Dynamic Shell Execution," "Excessive Permissions: Direct Shell Access," and "Potential Data Exfiltration via File Read (USER.md)."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Dynamic Shell Execution.** The skill demonstrates using `curl` to call the Google Calendar API. The JSON payload includes fields such as `summary`, `location`, `description`, and `attendees`, which are likely populated with dynamic data, potentially from untrusted user input. If these fields are not rigorously sanitized and escaped for both JSON and shell contexts before being interpolated into the `curl` command, an attacker could inject arbitrary shell commands or manipulate the JSON structure to achieve unintended execution. *Remediation:* avoid direct shell execution with dynamically constructed commands; use a dedicated, well-vetted Google Calendar API client library that handles parameterization and escaping securely. If shell execution is unavoidable, validate, sanitize, and shell-escape every dynamic component of the command and its payload. | LLM | SKILL.md:401 |
| HIGH | **Excessive Permissions: Direct Shell Access.** Instructing the agent to execute `curl` commands implies direct access to the underlying shell. This grants the AI agent broad permissions: if the command-injection vulnerability identified above is exploited, arbitrary commands can run. Direct shell access significantly increases the attack surface and the potential impact of a compromise. *Remediation:* restrict the agent's execution environment to prevent direct shell access; use sandboxed execution, or abstract shell commands behind secure, parameterized API calls or dedicated tools that do not expose the underlying shell. | LLM | SKILL.md:401 |
| HIGH | **Credential Exposure Risk via Shell Command Arguments.** The `GOOGLE_CALENDAR_ACCESS_TOKEN` is passed directly in the `Authorization` header of a shell-executed `curl` command. While this is a legitimate use, it exposes the token to the shell environment: an attacker exploiting the command-injection vulnerability could manipulate the command to log or transmit the token. *Remediation:* store sensitive credentials in a secure secrets management system and inject them into the execution environment in a way that minimizes exposure, ideally not as shell arguments or environment variables readable by arbitrary commands. Prefer API client libraries that handle authentication without exposing raw tokens to the shell. | LLM | SKILL.md:403 |
| MEDIUM | **Potential Data Exfiltration via File Read (USER.md).** The skill instructs the agent to "Read USER.md for the user's current location". While the intent is to retrieve one data point, an insufficiently constrained LLM might read and output the entire contents of `USER.md`; if the file holds sensitive personal information beyond location, this could lead to data exfiltration. *Remediation:* make file access granular, so the LLM reads only specific, pre-defined data points rather than whole files, and apply strict output filtering to prevent sensitive file contents from leaking. | LLM | SKILL.md:199 |
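The first three findings share one safe pattern: build the request as data and send it over HTTP directly, so user input never touches a shell and the token never appears on a command line. A minimal sketch, assuming a Python runtime; the helper names are illustrative, and the `primary`-calendar endpoint follows the standard Calendar v3 REST layout rather than anything documented in the skill itself:

```python
import json
import os
import urllib.request

def build_event_payload(summary: str, location: str, description: str,
                        attendees: list[str]) -> dict:
    """Build the event body as plain data; JSON serialization handles all escaping."""
    return {
        "summary": summary,
        "location": location,
        "description": description,
        "attendees": [{"email": a} for a in attendees],
    }

def create_event(payload: dict) -> bytes:
    """POST the event without a shell: no `curl`, no command-line interpolation."""
    req = urllib.request.Request(
        "https://www.googleapis.com/calendar/v3/calendars/primary/events",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # Token read from the environment at call time, never spliced
            # into a command string an attacker could manipulate.
            "Authorization": f"Bearer {os.environ['GOOGLE_CALENDAR_ACCESS_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

# A shell-injection attempt stays inert JSON data instead of executing:
payload = build_event_payload('team sync"; rm -rf ~ #', "HQ", "demo",
                              ["a@example.com"])
```

Because the payload is serialized by `json.dumps` and sent via `urllib`, the quoting problems that make the `curl` approach injectable simply do not arise; a vetted Google API client library would give the same property with less code.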
Full report: [skillshield.io/report/035ada5a21245481](https://skillshield.io/report/035ada5a21245481)