Trust Assessment
cognary-tasks received a trust score of 67/100, placing it in the Caution category. The skill carries security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings: Potential Command Injection via User Input, Direct Request for API Key Exposure, and Untrusted Global Package Installation Instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User Input.** The skill constructs shell commands using user-provided input for task titles, notes, and categories. If these inputs are not properly sanitized and shell-escaped before execution, a malicious user could inject arbitrary shell commands. For example, a title like `"my task"; rm -rf /` could lead to command execution. Ensure all user-provided arguments (e.g., title, notes, category, search query) are rigorously sanitized and properly shell-escaped before being passed to `cognary-cli` commands. Use a robust library or function designed for safe shell argument quoting. | LLM | SKILL.md:30 |
| HIGH | **Direct Request for API Key Exposure.** The skill explicitly instructs the user to provide the `COGNARY_API_KEY` directly to the LLM/skill for configuration. This practice exposes a sensitive credential in the conversational context, increasing the risk of accidental logging, exfiltration, or misuse if the LLM's environment is not perfectly secure. Avoid directly asking users for API keys in plain text. Instead, leverage secure credential management mechanisms provided by the platform (e.g., environment variables, secure vaults, or OAuth flows) that do not expose the raw key to the LLM's conversational context. If direct input is unavoidable, ensure the key is immediately masked/redacted after use and never stored or logged. | LLM | SKILL.md:17 |
| MEDIUM | **Untrusted Global Package Installation Instruction.** The skill instructs the host system to install `cognary-cli` globally via `npm install -g cognary-cli`. Installing packages from external registries, especially globally, introduces a supply chain risk: if the `cognary-cli` package or its dependencies are compromised, the host system could be affected. Verify the authenticity and integrity of `cognary-cli` before installation, use package managers with integrity checks (e.g., `npm audit`), and pin specific versions to prevent unexpected or malicious updates. Consider executing such installations within a sandboxed or isolated environment to limit potential impact. | LLM | SKILL.md:12 |
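The first HIGH finding's remediation (pass user input only through safe shell quoting) can be sketched as follows. This is a minimal illustration, not the skill's actual code: `add_task` and `render_command` are hypothetical helper names, and the `cognary-cli` flags shown are assumed for the example.

```python
import shlex
import subprocess

def add_task(title: str, notes: str = "", category: str = "") -> None:
    """Invoke cognary-cli with user input passed as discrete argv entries.

    Passing a list (with the default shell=False) means no shell ever parses
    the title, so an input like '"my task"; rm -rf /' stays a literal string.
    """
    args = ["cognary-cli", "add", "--title", title]
    if notes:
        args += ["--notes", notes]
    if category:
        args += ["--category", category]
    subprocess.run(args, check=True)

def render_command(title: str) -> str:
    """If a single command string is unavoidable (e.g. for logging),
    quote each user-supplied piece with shlex.quote first."""
    return "cognary-cli add --title " + shlex.quote(title)

# render_command('a; rm -rf /') → cognary-cli add --title 'a; rm -rf /'
```

The list-argv form is preferable to quoting-then-concatenating, since it removes the shell from the execution path entirely.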
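The second HIGH finding recommends sourcing the credential from the environment instead of chat input. A minimal sketch, assuming the key lives in the `COGNARY_API_KEY` environment variable; `get_api_key` and `mask` are illustrative names, not part of cognary-cli:

```python
import os
import sys

def get_api_key() -> str:
    """Read COGNARY_API_KEY from the environment rather than chat input,
    so the raw key never enters the conversational context."""
    key = os.environ.get("COGNARY_API_KEY")
    if not key:
        sys.exit("COGNARY_API_KEY is not set; export it from your shell "
                 "profile or a secrets manager, never paste it into chat.")
    return key

def mask(key: str) -> str:
    """Redact all but the last four characters for display or logging."""
    return "*" * max(len(key) - 4, 0) + key[-4:]
```

Anything the skill surfaces to the user or writes to a log should go through `mask`, never the raw return value of `get_api_key`.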
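The MEDIUM finding's version-pinning advice can be sketched like this. The version string is a placeholder (pin whatever release you have actually reviewed), and `install_command`/`install_pinned` are hypothetical helpers; `--ignore-scripts` is a real npm flag that blocks install-time lifecycle scripts, a common supply-chain attack vector.

```python
import subprocess

# Placeholder version: pin the exact release you have reviewed.
PINNED_VERSION = "1.2.3"

def install_command(version: str) -> list[str]:
    """Build an npm invocation that pins an exact version and disables
    install-time lifecycle scripts."""
    return ["npm", "install", "-g", f"cognary-cli@{version}", "--ignore-scripts"]

def install_pinned() -> None:
    subprocess.run(install_command(PINNED_VERSION), check=True)
```

Pinning `cognary-cli@<version>` prevents a later, possibly compromised release from being pulled in silently by a bare `npm install -g cognary-cli`.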
Full report: [skillshield.io/report/a8313eb878bdeefb](https://skillshield.io/report/a8313eb878bdeefb)
Powered by SkillShield