Trust Assessment
qmd received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 3 high, 1 medium, and 0 low severity. Key findings include sensitive environment variable access (`$HOME`), an unpinned dependency from a GitHub URL, and potential command injection via `qmd` arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned dependency from GitHub URL.** The skill installs the `qmd` tool directly from a GitHub repository URL (`https://github.com/tobi/qmd`) without specifying a commit hash, tag, or version, so the installed code could change at any time, potentially introducing vulnerabilities or malicious code without warning. This applies to both the installation instruction in the `SKILL.md` body and the manifest. Pin the dependency to a specific commit hash, tag, or version (e.g., `https://github.com/tobi/qmd#v1.2.3` or `https://github.com/tobi/qmd#<commit_hash>`) to ensure reproducible and secure installations. | LLM | SKILL.md:39 |
| HIGH | **Potential for command injection via `qmd` arguments.** The skill demonstrates `qmd` commands that take user-supplied strings as arguments (e.g., search queries, file paths). If an LLM agent constructs these commands from untrusted user input without proper sanitization (e.g., escaping shell metacharacters), it could lead to arbitrary command execution on the host system. Examples include `qmd search "query"` and `qmd get "path/to/file.md"`, where the quoted strings are placeholders for user input. Implement robust input sanitization and shell escaping for all user-provided arguments before constructing and executing `qmd` commands, or use a library or framework that handles command execution securely. | LLM | SKILL.md:99 |
| HIGH | **Direct file content retrieval capability.** The `qmd get` command, explicitly described in the skill, retrieves the full content of specified local files. A compromised or misdirected agent could use this capability to read and exfiltrate sensitive files from the host system. While this is an intended feature of the `qmd` tool, exposing it to an LLM agent without strict controls presents a significant data exfiltration risk. Implement strict access controls and validation on the paths passed to `qmd get` when invoked by an agent: allow retrieval only from explicitly permitted directories or collections, prevent access to sensitive system paths, and consider a confirmation step for files outside designated skill directories. | LLM | SKILL.md:119 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to the sensitive environment variable `$HOME` was detected in a shell context. Verify that this access is necessary and that the value is not exfiltrated. | Static | skills/levineam/qmd-external/SKILL.md:23 |
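The command-injection finding can be mitigated by passing untrusted input as discrete argv elements rather than interpolating it into a shell string. A minimal Python sketch; the helper names and the assumption that `qmd` subcommands take their argument as a single positional parameter are illustrative, not part of the skill:

```python
import shlex
import subprocess


def build_qmd_argv(subcommand: str, user_input: str) -> list[str]:
    """Build an argv list; each list element reaches qmd verbatim,
    so shell metacharacters in user_input are never interpreted."""
    return ["qmd", subcommand, user_input]


def build_qmd_shell(subcommand: str, user_input: str) -> str:
    """If a shell string is truly unavoidable, quote the untrusted part."""
    return f"qmd {subcommand} {shlex.quote(user_input)}"


def run_qmd(subcommand: str, user_input: str) -> str:
    # shell=False (the subprocess default) means no shell parses the
    # arguments, so there is nothing for injected metacharacters to do.
    result = subprocess.run(
        build_qmd_argv(subcommand, user_input),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

With argv lists, even a hostile query such as `x"; rm -rf ~` reaches `qmd` as one literal argument instead of being executed by a shell.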
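For the file-retrieval finding, one way to confine `qmd get` is to resolve each requested path and check it against an allowlist of permitted roots before invoking the tool. A hedged sketch, assuming Python 3.9+; the helper name and allowlist shape are assumptions for illustration:

```python
from pathlib import Path


def validate_get_path(requested: str, allowed_roots: list[Path]) -> Path:
    """Resolve the requested path and require it to sit under an allowed root.

    Path.resolve() collapses '..' segments and follows symlinks, so a
    request like 'notes/../../etc/passwd' is checked against its real
    location rather than its surface form.
    """
    resolved = Path(requested).resolve()
    for root in allowed_roots:
        if resolved.is_relative_to(root.resolve()):
            return resolved
    raise PermissionError(f"{requested!r} is outside the allowed directories")
```

An agent wrapper would call `validate_get_path` before shelling out to `qmd get`, and surface the `PermissionError` to the user instead of running the command.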
[Full report on SkillShield](https://skillshield.io/report/7e10af7dc99153f6)