Trust Assessment
qmd received a trust score of 63/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include sensitive environment variable access (`$HOME`), unpinned external dependency installation via a shell command, and broad filesystem read access enabling data exfiltration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned external dependency installation via shell command.** The skill installs the `qmd` tool directly from an unpinned GitHub repository (`https://github.com/tobi/qmd`) using a global `bun install -g` command. This constitutes a significant supply-chain risk: if the remote repository is compromised, malicious code could be injected into the `qmd` package and executed on the host system during installation, leading to arbitrary code execution. Pin the dependency to a specific commit hash (e.g., `https://github.com/tobi/qmd#<commit-hash>`) to ensure immutability, and consider using a trusted package registry with integrity checks if available. | LLM | SKILL.md:31 |
| HIGH | **Broad filesystem read access enabling data exfiltration.** The `qmd` tool is designed to index and search local files, including user notes and documents. Commands like `qmd get "path/to/file.md"` or `qmd search --full` allow retrieval of full document content from arbitrary paths specified by the user or agent. This grants the skill, and by extension the agent using it, broad read access to the local filesystem; a malicious or compromised agent could leverage this capability to read sensitive local files and exfiltrate their contents. Implement strict sandboxing for the agent's execution environment, limit the directories that `qmd` can index to non-sensitive paths, and require explicit user confirmation before accessing sensitive file paths or retrieving full document content. | LLM | SKILL.md:105 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to the sensitive environment variable `$HOME` was detected in a shell context. Verify that this access is necessary and that the value is not exfiltrated. | Static | skills/emcmillan80/qmd-markdown-search/SKILL.md:23 |
| MEDIUM | **Potential command injection through unsanitized arguments.** The skill provides examples of `qmd` commands that accept user-controlled arguments, such as file paths (`/path/to/notes`), search queries (`"query"`), and document IDs (`"#docid"`). If an agent directly interpolates untrusted user input into these arguments without proper sanitization, and if the `qmd` binary itself is vulnerable to shell injection or argument-parsing exploits, a malicious user could execute arbitrary commands on the host system. The agent integrating this skill must rigorously sanitize all user-provided input before constructing and executing `qmd` commands: input should be escaped or validated against expected patterns, and the `qmd` tool itself should be robust against shell injection in its argument parsing. | LLM | SKILL.md:37 |
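The command-injection remediation above can be sketched as an allowlist check before any `qmd` command line is built. This is a minimal sketch, not part of the skill: `sanitize_query` is a hypothetical helper, and the permitted character set is an assumption that should be tuned to whatever queries `qmd` actually accepts.

```shell
# Hypothetical helper: accept a user-supplied query only if it matches a
# conservative allowlist, then echo it back for use in a qmd command line.
# The character set below is an assumption, not qmd's documented grammar.
sanitize_query() {
  local q="$1"
  if [[ "$q" =~ ^[[:alnum:][:space:]._#/-]+$ ]]; then
    printf '%s\n' "$q"
  else
    echo "rejected: query contains unexpected characters" >&2
    return 1
  fi
}
```

Rejecting rather than escaping keeps shell metacharacters (`;`, `|`, backticks, `$(...)`) out of the constructed command entirely.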
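Similarly, the broad-filesystem-access finding suggests restricting which paths the agent may pass to `qmd get`. A minimal sketch, assuming a single allowed notes root and GNU `realpath`; `path_allowed` and `ALLOWED_ROOT` are illustrative names, not part of `qmd`:

```shell
# Resolve the requested path (collapsing any ../ components) and allow it
# only if it lands inside ALLOWED_ROOT. Both names are assumptions.
ALLOWED_ROOT="$HOME/notes"
path_allowed() {
  local resolved
  resolved="$(realpath -m -- "$1")" || return 1
  case "$resolved" in
    "$ALLOWED_ROOT"/*) return 0 ;;
    *) return 1 ;;
  esac
}
```

Resolving before comparing is what defeats `../` traversal: a request for `notes/../.ssh/id_rsa` canonicalizes to a path outside the root and is rejected.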