Trust Assessment
google-calendar received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 high and 1 medium severity (no critical or low). Key findings:
- Potential Command Injection via Unsanitized User Input in Shell Commands
- Potential Command Injection via Unsanitized User Input in Shell Commands (Quick Add)
- Unpinned Dependency in Setup Instructions
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via Unsanitized User Input in Shell Commands.** The skill documentation provides `bash` snippets that interpolate shell variables (e.g., `EVENT_ID`, `CALENDAR_ID`, `text`) directly into `curl` commands. If an AI agent or user fills these variables with untrusted input and no shell escaping, an attacker can craft input that executes arbitrary commands on the host. Remediation: shell-escape every variable derived from an untrusted source (e.g., `shlex.quote()` in Python or an equivalent in other languages), or use an API client library that sanitizes parameters instead of invoking a shell. | LLM | SKILL.md:109 |
| HIGH | **Potential Command Injection via Unsanitized User Input in Shell Commands (Quick Add).** The Quick Add example builds a `curl` command whose URL embeds the `text` parameter. If untrusted input is inserted into `text` without shell escaping, shell metacharacters can execute before the `curl` command is even formed; URL-encoding applied afterwards does not prevent this. Remediation: shell-escape all untrusted variables (e.g., `shlex.quote()`), and ensure user-provided text destined for URL parameters is both shell-escaped and URL-encoded. | LLM | SKILL.md:178 |
| MEDIUM | **Unpinned Dependency in Setup Instructions.** The setup instructions recommend `pip install gcalcli` with no version specified. This is a supply-chain risk: a future malicious or vulnerable release of `gcalcli` could be installed silently. Remediation: pin the dependency to a known-good version (e.g., `pip install gcalcli==X.Y.Z`), and review pinned versions regularly. | LLM | SKILL.md:19 |
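The two injection findings share one remediation pattern, which can be sketched in Python. This is an illustrative sketch, not part of the skill: the function names are hypothetical, and the URL shape assumes the Calendar v3 `quickAdd` endpoint that the skill's examples appear to target. The key ideas are to URL-encode untrusted text before it enters the URL and to build the command as an argument list (for `subprocess.run` with `shell=False`) so no shell ever parses user input, falling back to `shlex.quote()` only when a single shell string is unavoidable.

```python
import shlex
from urllib.parse import quote

def safe_quick_add(calendar_id: str, text: str) -> list[str]:
    """Build a curl invocation for a Quick Add request from untrusted input.

    quote(..., safe='') percent-encodes every reserved character, so shell
    metacharacters like ';' or '$' are neutralized inside the query string.
    Returning an argument list means the command can be run via
    subprocess.run(cmd) with no shell involved at all.
    """
    url = (
        "https://www.googleapis.com/calendar/v3/calendars/"
        f"{quote(calendar_id, safe='')}/events/quickAdd"
        f"?text={quote(text, safe='')}"
    )
    return ["curl", "-s", "-X", "POST", url]

def shell_string(argv: list[str]) -> str:
    """If a single shell string is unavoidable, escape each argument."""
    return " ".join(shlex.quote(arg) for arg in argv)
```

With this approach, a hostile input such as `Lunch; rm -rf /` survives only as inert percent-encoded text in the URL, and `shell_string` wraps it in safe quoting if the command must pass through a shell anyway.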
Full report: https://skillshield.io/report/31339c224bffcf30