Trust Assessment
notion received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 0 medium, and 1 low severity. Key findings include "Potential command injection via unsanitized user input in shell commands" (high) and "File reading pattern demonstrated, could be generalized for exfiltration" (low).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential command injection via unsanitized user input in shell commands.** The skill documentation provides `curl` command examples that are intended to be executed in a shell. If an LLM constructs these commands by directly interpolating untrusted user input (e.g., `page_id`, `data_source_id`, or JSON payload values) without proper shell escaping or JSON sanitization, the result could be arbitrary command execution or JSON injection. For instance, a user-provided `page_id` containing shell metacharacters could break out of the `curl` command, and unsanitized JSON payload values could let an attacker inject malicious JSON structures. When generating and executing shell commands based on user input, shell-escape every user-provided value (e.g., with `shlex.quote` in Python) before interpolating it into the command string, and JSON-escape user input before inserting it into a JSON payload. | LLM | SKILL.md:39 |
| LOW | **File reading pattern demonstrated, could be generalized for exfiltration.** The skill demonstrates reading an API key from a local file via `cat ~/.config/notion/api_key`. While intended for legitimate setup, an LLM could generalize this pattern to read arbitrary files from the local filesystem if prompted by a malicious user, potentially leading to data exfiltration. When an LLM executes commands that touch the filesystem, strictly validate and sandbox file paths, and do not let the LLM construct arbitrary paths from untrusted input. | LLM | SKILL.md:24 |
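The shell-escaping mitigation for the HIGH finding can be sketched as follows. This is a minimal illustration, not part of the skill itself: the helper name `build_curl_command` and the Notion endpoint URL shape are assumptions for the example; the point is that `shlex.quote` neutralizes shell metacharacters and `json.dumps` neutralizes JSON injection.

```python
import json
import shlex

def build_curl_command(page_id: str, payload: dict) -> str:
    """Hypothetical helper: build a curl command with user input escaped.

    page_id is shell-quoted with shlex.quote so metacharacters cannot
    break out of the command; the payload is serialized with json.dumps
    so user-supplied values cannot inject extra JSON structure.
    """
    url = f"https://api.notion.com/v1/pages/{page_id}"
    body = json.dumps(payload)  # JSON-escapes quotes, braces, etc.
    return (
        "curl -X PATCH "
        + shlex.quote(url)
        + " -H 'Content-Type: application/json' --data "
        + shlex.quote(body)
    )

# A malicious page_id and a quote-injecting payload value stay inert:
cmd = build_curl_command("abc; rm -rf ~", {"title": 'x", "evil": "y'})
```

Because both the URL and the body end up inside single-quoted shell arguments, the `; rm -rf ~` fragment is passed to `curl` as literal data rather than executed as a second command.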
[Full report on SkillShield](https://skillshield.io/report/ad508b40fb0288e5)
Powered by SkillShield