Security Audit
dceoy/speckit-agent-skills:skills/speckit-specify
github.com/dceoy/speckit-agent-skills

Trust Assessment
dceoy/speckit-agent-skills:skills/speckit-specify received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are command injection in the feature creation script and command injection in branch search.
The analysis covered 4 layers: llm_behavioral_safety, manifest_analysis, static_code_analysis, and dependency_graph. The llm_behavioral_safety layer scored lowest at 55/100, making it the primary area for improvement.
Last analyzed on February 8, 2026 (commit c21d8d2d). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection in Feature Creation Script.** The skill instructs the agent to construct and execute a shell command string that includes the raw user feature description as an argument. While the instructions attempt to mitigate this by asking the LLM to handle quoting, this is insufficient. A malicious user can provide a feature description containing shell metacharacters (e.g., `"; rm -rf /; echo "`) which, if not perfectly escaped by the LLM, will result in Arbitrary Code Execution (ACE) on the host system. Do not construct shell commands by concatenating strings with user input. Use a secure execution primitive that accepts arguments as a list (e.g., `subprocess.run(['script', arg1, arg2])`) to bypass shell interpretation. If shell execution is unavoidable, strictly sanitize the input to allow only safe characters before execution. | Unknown | SKILL.md:53 |
| HIGH | **Command Injection in Branch Search.** The workflow instructs the agent to execute shell commands using `grep` where the search pattern includes a `<short-name>` derived from user input. If the LLM generates a short name containing shell metacharacters (potentially via prompt injection), this will result in command injection when the shell processes the pipe and grep arguments. Validate that the generated `<short-name>` strictly matches a safe pattern (e.g., `^[a-zA-Z0-9-]+$`) before inserting it into any shell command. Alternatively, use git plumbing commands or native language libraries that do not require shell piping. | Unknown | SKILL.md:43 |
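The remediation for the critical finding can be sketched in Python: invoking the feature-creation script with an argument list keeps the shell out of the loop entirely, so a hostile description cannot escape its argument slot. The subprocess below is a hypothetical stand-in for the actual feature-creation script, which is not shown in this report.

```python
import subprocess
import sys

def run_with_arg(description: str) -> str:
    # Pass user input as a discrete argv element. With no shell involved,
    # metacharacters such as ';', '|', and '$(...)' are plain text, not syntax.
    # (A trivial Python one-liner stands in for the feature-creation script.)
    result = subprocess.run(
        [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.argv[1])",
         description],                     # argv list, not a concatenated string
        capture_output=True, text=True, check=True,
    )
    return result.stdout

hostile = '"; rm -rf /; echo "'
print(run_with_arg(hostile))  # the payload comes back verbatim; nothing executes
```

Because `subprocess.run` defaults to `shell=False`, the description reaches the child process as a single argument no matter what characters it contains.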
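Likewise, the mitigation for the high-severity branch-search finding takes only a few lines: allow-list the generated short name against the pattern the report recommends before it goes anywhere near a command, and replace the grep pipeline with git's own `--list` globbing. The function names below are illustrative, not part of the audited skill.

```python
import re
import subprocess

def is_safe_short_name(name: str) -> bool:
    # Allow only letters, digits, and hyphens (the report's ^[a-zA-Z0-9-]+$).
    # Anything else, including spaces, quotes, ';', '|', or '$', could change
    # the meaning of a shell command it is interpolated into.
    return re.fullmatch(r"[a-zA-Z0-9-]+", name) is not None

def find_feature_branches(short_name: str) -> list[str]:
    # Hypothetical replacement for the SKILL.md grep pipeline: validate first,
    # then let git do the pattern matching so no shell pipe is needed.
    if not is_safe_short_name(short_name):
        raise ValueError(f"rejected unsafe short name: {short_name!r}")
    out = subprocess.run(
        ["git", "branch", "--list", f"*{short_name}*"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.lstrip("*").strip() for line in out.splitlines()]
```

Rejecting unsafe names before command construction, rather than trying to escape them afterward, closes the prompt-injection path the finding describes.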
Full report: https://skillshield.io/report/8ec5a374b138f73a
Powered by SkillShield