Trust Assessment
Slipbot received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings: unsanitized user input in filename generation (high), unsanitized user input in the query mechanism (high), and trusting user input for metadata without explicit sanitization (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsanitized user input in filename generation.** The skill generates a filename using a slug derived "from content passed in (max 4-5 words)". If that content originates from untrusted user input and slug generation does not strip path separators (`/`, `\`, `..`) or shell metacharacters (backticks, semicolons, dollar signs), an attacker could craft input to (1) create files outside the intended slipbox directory (path traversal) or (2) inject commands if the filename is later used in a shell command without proper escaping. "Lowercase, hyphenated" alone is insufficient to guarantee safety. *Remediation:* restrict slugs to alphanumeric characters and hyphens, explicitly disallowing path separators and shell-special characters; use a robust library function that generates URL-safe or filename-safe slugs. | LLM | SKILL.md:32 |
| HIGH | **Unsanitized user input in query mechanism.** The skill responds to natural queries like "Show me notes about X". If the "X" part of the query is passed directly to a backend search mechanism (a shell command such as `grep` or `find`, or a database query) without sanitization and escaping, an attacker could inject commands or manipulate the query logic; if the search is LLM-driven, direct injection of "X" could manipulate the LLM's behavior via prompt injection. *Remediation:* sanitize and escape all query input before it reaches any backend mechanism; use parameterized queries or safe API calls instead of string concatenation when constructing commands; for LLM-driven queries, isolate user input in the prompt to prevent injection. | LLM | SKILL.md:122 |
| MEDIUM | **Trusting user input for metadata without explicit sanitization.** The skill explicitly states "No external API calls - trust user input" when processing source information, implying that metadata fields (source title, type, author, note title, tags, link reasons) are taken directly from untrusted input. If these fields are stored in YAML frontmatter or `graph.json` without validation, they could later (1) cause prompt injection when processed by an LLM for querying or linking, (2) cause command injection when used in shell commands such as `grep` or `sed`, or (3) cause cross-site scripting (XSS) when rendered in a user interface. *Remediation:* validate and sanitize all user-provided metadata fields (title, source fields, tags, link reasons, and note content), escaping characters significant to YAML, JSON, markdown, and any shell commands or database queries that process the data; apply proper output encoding when displaying content. | LLM | SKILL.md:40 |
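The first finding's remediation can be sketched as an allowlist slug generator, a minimal illustration rather than the skill's actual implementation (the function name `safe_slug` is hypothetical):

```python
import re

def safe_slug(text: str, max_words: int = 5) -> str:
    """Reduce arbitrary text to a lowercase, hyphenated, filename-safe slug.

    Allowlist approach: anything that is not a lowercase letter, digit, or
    space is dropped, so path separators ('/', '\\', '..') and shell
    metacharacters ('`', ';', '$') can never reach the filename.
    """
    words = re.sub(r"[^a-z0-9 ]+", " ", text.lower()).split()
    return "-".join(words[:max_words]) or "untitled"
```

Because the filter is an allowlist rather than a denylist, hostile input such as `../../etc/passwd; rm -rf $HOME` degrades to the harmless slug `etc-passwd-rm-rf-home`.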
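For the query finding, one way to keep user-supplied search terms out of a shell is to invoke `grep` with an argument list instead of a command string; this is a sketch under assumed names (`search_notes` and the `slipbox` directory are illustrative, not from the skill):

```python
import subprocess

def search_notes(term: str, slipbox_dir: str = "slipbox") -> list[str]:
    """List note files matching a user-supplied term, without a shell.

    Passing arguments as a list (and never using shell=True) delivers the
    term to grep verbatim, so semicolons or backticks in it are inert; the
    '-e' flag marks it as a pattern, so a term starting with '-' cannot be
    misparsed as a grep option.
    """
    result = subprocess.run(
        ["grep", "-r", "-l", "-i", "-e", term, slipbox_dir],
        capture_output=True, text=True,
    )
    return result.stdout.splitlines()
```

The same principle applies to databases (parameterized queries) and to LLM-driven search, where user input should be clearly delimited from instructions in the prompt.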
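For the metadata finding, untrusted fields can be neutralized before they land in YAML frontmatter by emitting each scalar as a JSON string, which is also a valid YAML double-quoted scalar; quotes, colons, newlines, and leading dashes are escaped rather than interpreted. A minimal sketch (the `frontmatter` helper is hypothetical):

```python
import json

def frontmatter(metadata: dict[str, object]) -> str:
    """Serialize user-supplied metadata into YAML frontmatter safely.

    Each value is rendered with json.dumps, so a hostile title or tag
    cannot terminate its field, start a new key, or break the document
    structure with an embedded '---' line.
    """
    lines = ["---"]
    for key, value in metadata.items():
        if isinstance(value, list):
            lines.append(f"{key}:")
            lines.extend(f"  - {json.dumps(str(item))}" for item in value)
        else:
            lines.append(f"{key}: {json.dumps(str(value))}")
    lines.append("---")
    return "\n".join(lines)
```

Output encoding at display time (e.g., HTML-escaping markdown-rendered fields) is still needed to address the XSS branch of the finding.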
Full report: https://skillshield.io/report/5af1975aba3ce8b6 (powered by SkillShield).