Trust Assessment
task-tracker received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via LLM Prompt Generation, Command Injection via Unsanitized Output for Execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via LLM Prompt Generation The `extract_prompt` function in `scripts/extract_tasks.py` embeds untrusted user input (`text`) directly into a prompt intended for a Language Model (LLM). This allows an attacker to inject malicious instructions or data into the LLM's context, potentially leading to unauthorized actions, data exfiltration, or manipulation of the LLM's behavior. The `text` variable is taken directly from user-provided `--from-text` or `--from-file` arguments without sanitization or proper delimiting for LLM consumption. Implement robust input sanitization or use proper LLM prompt templating techniques (e.g., XML tags, JSON, or other structured delimiters) to clearly separate user input from system instructions within the generated prompt. Ensure the LLM is instructed to strictly adhere to the delimiters and not interpret content within them as instructions. | LLM | scripts/extract_tasks.py:100 | |
| HIGH | Command Injection via Unsanitized Output for Execution The `format_task_command` function in `scripts/extract_tasks.py` constructs shell commands (`tasks.py add "..."`) using user-provided task titles and other details. While the script itself does not execute these commands, it explicitly instructs a downstream LLM to output commands in this format for execution. If the `task["title"]` (derived from untrusted user input) contains shell metacharacters (e.g., `"task"; rm -rf /`), and the generated command is subsequently executed by an LLM agent or user without proper shell escaping, it could lead to arbitrary command injection on the host system. Before embedding user-controlled strings into shell commands, ensure they are properly escaped for the target shell environment. For Python, consider using `shlex.quote()` for shell command arguments. Alternatively, if the commands are intended for an LLM to execute, provide clear instructions to the LLM to sanitize or escape arguments before execution, or design the interaction such that the LLM calls a structured API instead of raw shell commands. | LLM | scripts/extract_tasks.py:61 |
Scan History
Embed Code
[](https://skillshield.io/report/8578940b46f8a281)
Powered by SkillShield