Trust Assessment
qmd received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. The key findings are command injection via a user-controlled search query and potential data exfiltration via the Read tool on arbitrary file paths.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Command injection via user-controlled search query.** The skill's workflow states it will "Run appropriate search command" using user-provided `$ARGUMENTS`. If `$ARGUMENTS` is interpolated directly into a shell command without sanitization or escaping, a malicious user could inject arbitrary shell commands (e.g., `'; rm -rf /'`). The skill description specifies no input sanitization before command execution. Remediation: sanitize and shell-escape all user-provided arguments before constructing commands, and use a safe execution mechanism that prevents shell injection, such as passing arguments as a list to `subprocess.run()` instead of a single shell string. | LLM | SKILL.md:75 |
| HIGH | **Potential data exfiltration via the Read tool on arbitrary file paths.** The workflow states it will "Present results to user with file paths" and, "If user wants to read a specific result, use the Read tool on the file path". While `qmd` is intended for markdown knowledge bases, the Read tool is generic: a manipulated search query could surface paths to sensitive system files (e.g., `/etc/passwd`, API keys, configuration files), or the Read tool could be invoked with an arbitrary user-controlled path, leading to unauthorized data exfiltration. Remediation: sandbox the Read tool to the designated knowledge-base directories, enforce strict path validation and canonicalization to prevent directory traversal, and never allow it to be invoked with arbitrary user-supplied paths. | LLM | SKILL.md:78 |
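The first finding's remediation can be sketched in Python. This is a minimal illustration, not the skill's actual code: `grep` stands in for whatever search command qmd really runs, and the function name is hypothetical. The point is that passing an argv list (no `shell=True`) delivers the user's query to the process as a single argument, so shell metacharacters are never interpreted.

```python
import subprocess

def run_search(user_query: str, search_dir: str = ".") -> str:
    """Run a search with user input passed as one discrete argv entry.

    Hypothetical sketch: `grep` is a stand-in for the skill's real
    search command. Because argv is a list and shell=True is NOT used,
    metacharacters in user_query (';', '|', '$(...)') reach grep as
    literal text instead of being executed by a shell.
    """
    completed = subprocess.run(
        # "--" marks end-of-options so a query starting with "-" cannot
        # be misread as a grep flag; --fixed-strings disables regex.
        ["grep", "-r", "--fixed-strings", "--", user_query, search_dir],
        capture_output=True,
        text=True,
        check=False,  # grep exits 1 on "no matches"; that is not an error here
    )
    return completed.stdout
```

A query such as `; rm -rf /` is simply searched for as literal text; nothing is executed.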
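The second finding's remediation, path canonicalization against traversal, can likewise be sketched. Again a hypothetical illustration, not the Read tool's implementation: `Path.resolve()` expands `..` segments and symlinks, and `is_relative_to()` (Python 3.9+) then confirms the canonical target still sits inside the knowledge-base root.

```python
from pathlib import Path

def safe_read(kb_root: str, requested: str) -> str:
    """Read a file only if its canonical path stays inside kb_root.

    Hypothetical sketch of the recommended sandboxing. resolve()
    canonicalizes both paths (expanding '..' and symlinks), so a
    traversal attempt like 'docs/../../etc/passwd' -- or an absolute
    path like '/etc/passwd' -- is rejected before any read happens.
    """
    root = Path(kb_root).resolve()
    target = (root / requested).resolve()
    if not target.is_relative_to(root):
        raise PermissionError(f"path escapes knowledge base: {requested!r}")
    return target.read_text()
```

Note that resolving *before* the containment check is what defeats symlink tricks: a link inside the knowledge base that points outside it resolves to its real location and fails `is_relative_to`.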