Trust Assessment
`drafts` received a trust score of 73/100, placing it in the Caution category: the skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include "Unpinned dependency in installation instructions" and "Skill enables direct access and potential exfiltration of user notes".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned dependency in installation instructions.** The installation instructions for the `drafts` CLI tool use `@latest` for the Go package, so the dependency is not pinned to a specific version. An unpinned dependency can lead to unexpected behavior, breaking changes, or the introduction of malicious code if a future version of the `github.com/nerveband/drafts` repository is compromised or publishes a malicious update. This poses a supply chain risk. Pin the dependency to a specific, known-good version (e.g., `github.com/nerveband/drafts/cmd/drafts@v1.2.3`) to ensure reproducibility and mitigate supply chain risks from future malicious updates. | LLM | SKILL.md:30 |
| HIGH | **Skill enables direct access and potential exfiltration of user notes.** The `drafts` CLI tool, which this skill exposes, provides direct access to a user's Drafts notes. Specifically, commands like `drafts get <uuid>` retrieve the full content of any draft, and `drafts run "Copy" -u <uuid>` can copy draft content to the clipboard. User notes can contain highly sensitive personal or professional information, and an attacker could craft a prompt injection instructing the LLM to read and exfiltrate the content of specific or all user drafts. Implement strict input validation and output filtering for LLM interactions with this tool. Consider adding a confirmation step for sensitive data retrieval operations, or limiting the scope of data access if possible. Ensure the LLM environment is sandboxed to prevent direct exfiltration of sensitive data via its responses or other channels. | LLM | SKILL.md:69 |
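The remediation for the first finding is simply pinning the module version in the install command. A minimal sketch, using the placeholder version tag `v1.2.3` from the finding itself (not a verified release of the repository):

```shell
# Unpinned: resolves to whatever version is tagged latest at install time,
# so a compromised future release would be pulled in silently.
go install github.com/nerveband/drafts/cmd/drafts@latest

# Pinned to a specific, known-good version (v1.2.3 is a placeholder tag).
go install github.com/nerveband/drafts/cmd/drafts@v1.2.3
```

Pinning also makes installs reproducible, since Go module versions resolve to an immutable commit recorded in the checksum database.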
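The second finding recommends a confirmation step before sensitive retrieval operations. One way to sketch that is a gate in front of the CLI that refuses content-reading subcommands until the user explicitly confirms; the wrapper, the subcommand set, and the function names below are illustrative assumptions, not part of the `drafts` tool:

```python
# Hypothetical confirmation gate for drafts CLI invocations.
# Assumption: "get" and "run" are the subcommands that can expose note content.
SENSITIVE_SUBCOMMANDS = {"get", "run"}

def requires_confirmation(argv: list[str]) -> bool:
    """Return True when an invocation should ask the user before running."""
    return bool(argv) and argv[0] in SENSITIVE_SUBCOMMANDS

def gated_drafts(argv: list[str], confirmed: bool = False) -> str:
    """Refuse sensitive commands unless the user has explicitly confirmed."""
    if requires_confirmation(argv) and not confirmed:
        return "blocked: confirmation required for " + argv[0]
    # A real wrapper would shell out here, e.g. subprocess.run(["drafts", *argv]).
    return "allowed: " + " ".join(argv)

print(gated_drafts(["get", "8A3F"]))                  # blocked until confirmed
print(gated_drafts(["get", "8A3F"], confirmed=True))  # allowed after confirmation
print(gated_drafts(["list"]))                         # non-sensitive, allowed
```

A gate like this only mitigates the prompt-injection path if the confirmation comes from the user out-of-band, not from the model's own output.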
[Full report](https://skillshield.io/report/aeacc0fd371d47fe)
Powered by SkillShield