Trust Assessment
qmd received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, and 1 medium severity (0 low). Key findings include "Potential Command Injection via qmd CLI arguments", "Broad File System Access and Data Exfiltration Risk", and "Unpinned Dependency from GitHub URL".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via qmd CLI arguments.** The skill instructs the LLM to construct and execute `qmd` CLI commands based on user input (e.g., search queries, file paths). If user-provided strings are directly interpolated into these commands without proper sanitization or escaping, an attacker could inject arbitrary shell commands. For example, a malicious search query like `"foo; rm -rf /"` could lead to arbitrary code execution on the host system. *Remediation:* Implement robust input sanitization and escaping for all user-provided arguments before constructing `qmd` commands. Consider using a dedicated library for safe command execution, or ensure the LLM is explicitly instructed to escape special characters. If possible, use a tool execution framework that handles argument passing securely rather than raw string concatenation for shell commands. | LLM | SKILL.md:24 |
| HIGH | **Broad File System Access and Data Exfiltration Risk.** The `qmd` tool, particularly the `get` and `multi-get` commands, allows retrieval of local file content. The skill's examples show access to user directories (`~/notes`) and arbitrary paths (`docs/guide.md`). If the LLM is prompted to use these commands with user-controlled file paths or patterns, it could be coerced into reading and exfiltrating sensitive files from the local filesystem (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, configuration files) that are accessible to the user running the skill. The `--full` option for search commands also returns complete document content, increasing the risk. *Remediation:* Restrict the LLM's ability to generate arbitrary file paths for `qmd get` and `multi-get` commands. Implement strict validation to ensure requested paths stay within designated, non-sensitive knowledge base directories. Consider sandboxing the `qmd` process or limiting its filesystem access to only the intended knowledge base directories. | LLM | SKILL.md:50 |
| MEDIUM | **Unpinned Dependency from GitHub URL.** The skill's installation instructions use `bun install -g https://github.com/tobi/qmd`. This command installs the `qmd` tool directly from the default branch of a GitHub repository without specifying a version, tag, or commit hash. This makes the skill vulnerable to supply chain attacks: if the upstream repository is compromised, or if the default branch is updated with malicious code, future installations of this skill would unknowingly pull and execute the compromised version. *Remediation:* Pin the dependency to a specific version, tag, or commit hash (e.g., `bun install -g https://github.com/tobi/qmd#v1.2.3` or `bun install -g https://github.com/tobi/qmd#<commit_hash>`). This ensures deterministic installations and reduces the risk of unexpected or malicious code changes. | LLM | SKILL.md:13 |
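The command-injection remediation above can be sketched in a few lines. This is a minimal illustration, assuming a Python wrapper mediates the LLM's tool calls; the function and argument names are hypothetical, not part of the `qmd` skill itself. The key idea is to build the command as an argv list, so the user's query is always a single argument and no shell ever parses it; `shlex.quote` is shown for the fallback case where a shell string is unavoidable.

```python
import shlex


def build_qmd_command(query: str) -> list[str]:
    """Build a qmd search invocation as an argv list (hypothetical wrapper).

    Passing a list to subprocess.run (with shell=False, the default)
    means the query is delivered as one argument; metacharacters like
    ';' or '|' are never interpreted by a shell.
    """
    return ["qmd", "search", query]


def as_shell_string(argv: list[str]) -> str:
    """Escape every element if a single shell string is truly required."""
    return " ".join(shlex.quote(arg) for arg in argv)
```

With this approach, the malicious query from the finding stays inert: `build_qmd_command("foo; rm -rf /")` yields `["qmd", "search", "foo; rm -rf /"]`, and the escaped shell form becomes `qmd search 'foo; rm -rf /'`.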
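The path-validation remediation for the file-access finding can likewise be sketched. This assumes the same hypothetical Python wrapper and uses `~/notes` (from the skill's own examples) as the knowledge-base root; the function name is illustrative. Resolving the candidate path before checking containment defeats both `..` traversal and symlink escapes.

```python
from pathlib import Path

# Knowledge-base root, taken from the skill's example directory (assumption).
KB_ROOT = Path("~/notes").expanduser().resolve()


def safe_kb_path(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside KB_ROOT.

    .resolve() normalizes '..' segments and follows symlinks, so the
    containment check below cannot be bypassed by either.
    """
    candidate = (KB_ROOT / user_path).resolve()
    if not candidate.is_relative_to(KB_ROOT):  # Python 3.9+
        raise ValueError(f"path escapes knowledge base: {user_path!r}")
    return candidate
```

A request for `docs/guide.md` resolves inside the root and is allowed, while `../../etc/passwd` resolves outside it and raises.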
Scan History
Embed Code
[View the full SkillShield report](https://skillshield.io/report/54da69008668a9e2)
Powered by SkillShield