Security Audit
dceoy/speckit-agent-skills:skills/speckit-tasks
github.com/dceoy/speckit-agent-skills

Trust Assessment
dceoy/speckit-agent-skills:skills/speckit-tasks received a trust score of 68/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Direct Shell Script Execution, Prompt Injection via Generated LLM Output, and Potential Arbitrary File Read via FEATURE_DIR.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on April 1, 2026 (commit a934d48e). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Direct Shell Script Execution.** The skill explicitly states that it will execute a shell script (`.specify/scripts/bash/check-prerequisites.sh`) from the repository root. Although the script path is hardcoded, direct execution of shell scripts introduces a command-injection risk: a malicious skill developer could embed arbitrary commands in the script, or the script itself could be manipulated via environment variables or implicitly passed inputs, leading to arbitrary code execution on the host system. The instruction 'For single quotes in args like "I'm Groot", use escape syntax' implies that arguments are constructed and passed to the script; if these are not properly sanitized, they could enable injection. *Recommendation:* Avoid direct execution of shell scripts. If external processes are necessary, use a sandboxed environment or a dedicated tool-execution framework that strictly controls inputs and outputs. If a script must be run, audit it thoroughly and rigorously sanitize and validate every argument passed to it. Where possible, use a more secure, language-native way to check prerequisites. | LLM | SKILL.md:23 |
| HIGH | **Prompt Injection via Generated LLM Output.** The skill generates `tasks.md`, which is explicitly designed to be 'immediately executable' by a downstream LLM. The content of `tasks.md` is derived from several inputs, including `plan.md`, `spec.md`, and 'user constraints or priorities from the request'. If a malicious user can inject crafted text into any of these sources, that text could be embedded in the generated `tasks.md` without sanitization, enabling prompt-injection attacks against the LLM that subsequently processes `tasks.md` and allowing the attacker to manipulate that model's behavior. *Recommendation:* Sanitize and validate all user-controlled inputs before incorporating them into the generated `tasks.md`. In particular, filter out or escape characters and patterns that a downstream LLM could interpret as instructions or malicious commands. Consider a strict templating engine that automatically escapes dynamic content. | LLM | SKILL.md:77 |
| MEDIUM | **Potential Arbitrary File Read via FEATURE_DIR.** The skill reads multiple files (`plan.md`, `spec.md`, `data-model.md`, `contracts/`, `research.md`, `quickstart.md`) from a `FEATURE_DIR`. If `FEATURE_DIR` can be influenced by a malicious user (e.g., through a crafted feature name or path-traversal sequences such as `../../`), the skill could read arbitrary files from the host filesystem, leading to data exfiltration. The description states 'All paths must be absolute' for the *output* of the prerequisite script, but it is unclear how `FEATURE_DIR` itself is determined and validated. *Recommendation:* Strictly control and validate `FEATURE_DIR`. If it is derived from user input, apply rigorous path sanitization to prevent traversal attacks, restrict file reads to a tightly defined and sandboxed directory tree, and use explicit allow-lists for file paths and extensions where possible. | LLM | SKILL.md:26 |
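To illustrate a mitigation for the shell-execution finding: when an external script truly must run, passing argv as a list (never a command string) keeps quotes and metacharacters inert. The `run_prereq_script` wrapper below is a minimal Python sketch, not part of the skill, which itself invokes a Bash script:

```python
import subprocess

def run_prereq_script(script_path: str, args: list[str]) -> str:
    """Run a prerequisite script without invoking a shell.

    Passing argv as a list means quotes, semicolons, and other shell
    metacharacters in the arguments are delivered literally to the
    child process and are never interpreted as commands.
    """
    result = subprocess.run(
        [script_path, *args],
        capture_output=True,
        text=True,
        check=True,  # surface non-zero exit codes instead of ignoring them
    )
    return result.stdout

# Even an argument full of shell metacharacters is inert here:
# run_prereq_script("/bin/echo", ["I'm Groot; rm -rf /"])
```

With this pattern the awkward "escape syntax for single quotes" instruction becomes unnecessary, since no shell ever parses the arguments.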
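For the prompt-injection finding, one heuristic mitigation is to neutralize user-controlled text before it is embedded in the generated `tasks.md`. The `sanitize_for_tasks_md` helper and its pattern list below are illustrative assumptions, a sketch rather than a complete defense:

```python
import re

# Illustrative phrasings a downstream LLM might read as instructions;
# a real deny-list would need to be broader and actively maintained.
SUSPICIOUS = re.compile(r"(?i)\b(ignore (all )?previous|system prompt|disregard)\b")

def sanitize_for_tasks_md(user_text: str) -> str:
    """Neutralize user-supplied text before embedding it in tasks.md."""
    # Break up triple backticks so user text cannot open or close a
    # code fence and escape its delimited region.
    text = user_text.replace("```", "`\u200b``")
    # Replace instruction-like phrasings rather than passing them through.
    text = SUSPICIOUS.sub("[filtered]", text)
    # Wrap in explicit delimiters so the downstream LLM can be told to
    # treat everything inside as data, not instructions.
    return f"<user-input>\n{text}\n</user-input>"
```

Deny-lists alone are bypassable; the report's suggestion of a strict auto-escaping templating engine, combined with delimiting untrusted content as above, is the stronger combination.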
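For the FEATURE_DIR finding, containment checking after path resolution is the standard traversal defense. The sketch below assumes a hypothetical repo root of `/srv/speckit-repo` and a `specs/` layout; the skill's actual directory structure may differ:

```python
from pathlib import Path

# Assumed trusted base directory for this sketch.
REPO_ROOT = Path("/srv/speckit-repo")

def resolve_feature_dir(feature_name: str) -> Path:
    """Resolve FEATURE_DIR under the repo root, rejecting traversal.

    resolve() collapses any `..` components (and follows symlinks for
    existing paths), so an escape attempt is caught by the containment
    check below rather than slipping through as a relative path.
    """
    candidate = (REPO_ROOT / "specs" / feature_name).resolve()
    if not candidate.is_relative_to(REPO_ROOT.resolve()):  # Python 3.9+
        raise ValueError(f"FEATURE_DIR escapes repo root: {candidate}")
    return candidate
```

A crafted name such as `../../../etc/passwd` resolves outside the root and is rejected, while an ordinary feature name resolves to an absolute path inside it, matching the report's "all paths must be absolute" expectation.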