Security Audit
dceoy/speckit-agent-skills:skills/speckit-implement
github.com/dceoy/speckit-agent-skills

Trust Assessment
dceoy/speckit-agent-skills:skills/speckit-implement received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include Direct Shell Script Execution from Untrusted Repository, Direct Execution of System Commands and Implicit Code Execution, and Broad File System Write and Modification Permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on April 1, 2026 (commit a934d48e). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct Shell Script Execution from Untrusted Repository.** The skill explicitly instructs the LLM to execute a shell script located within the repository, `.specify/scripts/bash/check-prerequisites.sh`. This script is part of the untrusted content, so an attacker who can modify it in the repository can achieve arbitrary command execution on the host system where the LLM is running. **Remediation:** Avoid direct execution of scripts from untrusted repositories. If execution is necessary, sandbox the execution environment, strictly validate script content, or use a predefined, trusted set of commands and tools. | LLM | SKILL.md:22 |
| HIGH | **Direct Execution of System Commands and Implicit Code Execution.** The skill explicitly instructs the LLM to execute `git rev-parse --git-dir`. While `git` is a standard utility, this demonstrates a pattern of direct shell command execution. More critically, the skill's core purpose is to "Execute implementation following the task plan" (Workflow Step 6) and "Core development" (Workflow Step 7), which inherently involves generating and executing code, potentially including arbitrary shell commands, based on the untrusted `tasks.md` and `plan.md` files. This creates a broad command injection surface. **Remediation:** Restrict the LLM's ability to execute arbitrary shell commands. Implement a strict allowlist of safe commands and arguments, or use a sandboxed execution environment. Ensure that any code generated by the LLM is thoroughly reviewed and executed in a secure, isolated context. | LLM | SKILL.md:68 |
| HIGH | **Broad File System Write and Modification Permissions.** The skill states that it will create and modify various ignore files (`.gitignore`, `.dockerignore`, etc.; Workflow Step 4) and update `tasks.md` by marking tasks as complete (Workflow Step 8). Furthermore, the skill's primary output is "Implementation changes in the codebase", indicating broad write access to the entire repository. This level of access allows the LLM to arbitrarily modify, create, or delete files within the project, posing a significant risk if exploited. **Remediation:** Implement fine-grained access control for file system operations. Limit the LLM's write access to only specific, necessary files or directories, and require explicit user confirmation for any significant file modifications or creations. | LLM | SKILL.md:79 |
| MEDIUM | **Potential Prompt Injection from Untrusted Input Documents.** The skill instructs the LLM to load and analyze several documents (`tasks.md`, `plan.md`, `data-model.md`, `contracts/`, `research.md`, `quickstart.md`) from the untrusted repository, and their content directly guides the LLM's implementation process. An attacker could embed malicious instructions in these files to manipulate the LLM's behavior, override its safety guidelines, or steer it toward unintended actions. **Remediation:** Sanitize and validate all documents read from the untrusted repository. Use a separate, isolated LLM call to parse and extract structured data from these documents rather than feeding their raw content into the main instruction context, and clearly delineate trusted instructions from untrusted data. | LLM | SKILL.md:57 |
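The remediation for the command-execution findings recommends a strict allowlist of safe commands. A minimal sketch of what that gate could look like (the allowlist contents and the two-token prefix scheme are illustrative assumptions, not part of the skill):

```python
import shlex

# Hypothetical allowlist: read-only git subcommands the agent may run.
ALLOWED_COMMANDS = {
    ("git", "rev-parse"),
    ("git", "status"),
    ("git", "log"),
}

def is_allowed(command_line: str) -> bool:
    """Return True only if the command's first two tokens match the allowlist."""
    tokens = shlex.split(command_line)
    if len(tokens) < 2:
        return False
    return (tokens[0], tokens[1]) in ALLOWED_COMMANDS
```

Because the line is tokenized with `shlex` and checked before any execution, shell metacharacters smuggled into arguments (e.g. `git rev-parse; rm -rf /`) fail the prefix check; approved token lists would then be passed to `subprocess.run(tokens, shell=False)` so no shell ever interprets them.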
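The broad-write finding suggests limiting write access to specific files. One way to sketch that, assuming a hypothetical project root and a fixed set of writable paths (path names here are examples, not taken from the skill):

```python
from pathlib import Path

# Hypothetical sandbox root and the only files the agent may write.
PROJECT_ROOT = Path("/workspace/project").resolve()
WRITABLE = {PROJECT_ROOT / "tasks.md", PROJECT_ROOT / ".gitignore"}

def check_write(target: str) -> bool:
    """Resolve the requested path and allow it only if it is on the write list.

    Resolving defeats traversal tricks like '../../etc/passwd'."""
    resolved = (PROJECT_ROOT / target).resolve()
    return resolved in WRITABLE
```

Resolving before comparison is the key step: a relative path that escapes the root (or an absolute path) normalizes to something outside `WRITABLE` and is rejected.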
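For the prompt-injection finding, the recommended delineation of trusted instructions from untrusted data can be sketched as a wrapper that marks repository content as data before it reaches the model. This is a mitigation, not a guarantee, and the marker format is an assumption for illustration:

```python
def wrap_untrusted(name: str, content: str) -> str:
    """Wrap repository file content in explicit data markers so the system
    prompt can instruct the model to treat the enclosed text as data only,
    never as instructions."""
    return (
        f"<untrusted-document source={name!r}>\n"
        f"{content}\n"
        f"</untrusted-document>"
    )
```

The accompanying system prompt would state that anything inside `<untrusted-document>` tags must never override prior instructions; pairing this with the separate, isolated parsing call the finding recommends reduces, but does not eliminate, injection risk.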
Embed Code
[SkillShield report](https://skillshield.io/report/5a90ad2169d68f99)
Powered by SkillShield