Security Audit
dceoy/speckit-agent-skills:skills/speckit-clarify
github.com/dceoy/speckit-agent-skills

Trust Assessment
dceoy/speckit-agent-skills:skills/speckit-clarify received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. The key findings are execution of an unanalyzed external shell script and agent vulnerability to prompt injection from untrusted spec content.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 1, 2026 (commit a934d48e). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Execution of unanalyzed external shell script | LLM | SKILL.md:40 |
| HIGH | Agent vulnerable to prompt injection from untrusted spec content | LLM | SKILL.md:38 |

**HIGH: Execution of unanalyzed external shell script** (LLM layer, SKILL.md:40)

The skill explicitly instructs the agent to execute an external shell script at `.specify/scripts/bash/check-prerequisites.sh`. The content of this script is not provided for analysis within the skill package. Executing external scripts without prior security review introduces a significant command-injection risk: a malicious or vulnerable script could perform arbitrary actions, including data exfiltration, system modification, or further command injection. Although the arguments `--json --paths-only` are hardcoded, the script itself is an opaque dependency.

Recommendations:
1. **Provide the script for analysis**: include the content of `.specify/scripts/bash/check-prerequisites.sh` in the skill package for security review.
2. **Minimize script privileges**: ensure the script runs with the least necessary permissions.
3. **Replace with internal logic**: where possible, replace the external script call with equivalent self-contained logic inside the skill, reducing reliance on external executables.

**HIGH: Agent vulnerable to prompt injection from untrusted spec content** (LLM layer, SKILL.md:38)

The skill instructs the agent to load and process an untrusted feature specification (`specs/<feature>/spec.md`), then generate clarification questions, analyze options, and provide recommended answers based on that content. A malicious `spec.md` could contain hidden instructions or adversarial prompts designed to manipulate the agent's internal reasoning, influence the generated questions and answers, or attempt to override the agent's core instructions. This could lead the agent to generate inappropriate content, reveal sensitive information, or behave unexpectedly.

Recommendations:
1. **Implement robust input sanitization**: before processing, sanitize the `spec.md` content to remove potential prompt-injection attempts, e.g. by filtering keywords, using a separate LLM pass for sanitization, or applying content-moderation techniques.
2. **Enforce strict output validation**: apply validation and guardrails to the generated questions, recommendations, and answers so they adhere to the intended format and purpose and contain nothing unexpected or malicious.
3. **Isolate untrusted content**: when feeding `spec.md` to the agent, clearly delineate it with strong delimiters and instruct the agent to treat it as data, not instructions.
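One way to mitigate the opaque-script risk in the first finding is to pin the script to a checksum recorded at review time, refusing to run it if the file has changed. A minimal sketch in Python; the pinned digest value and the `run_pinned_script` helper are hypothetical, not part of the skill:

```python
import hashlib
import subprocess

# Hypothetical pinned SHA-256 of the version of the script that was
# actually security-reviewed. A placeholder here; fill in the real digest.
EXPECTED_SHA256 = "0" * 64
SCRIPT_PATH = ".specify/scripts/bash/check-prerequisites.sh"


def run_pinned_script(path: str, expected_sha256: str) -> str:
    """Execute the script only if its bytes match the reviewed digest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"refusing to run {path}: digest {digest} does not match "
            f"pinned {expected_sha256}"
        )
    # Arguments are hardcoded, matching the skill's invocation.
    result = subprocess.run(
        ["bash", path, "--json", "--paths-only"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

This does not make the script safe, but it converts "runs whatever is on disk" into "runs only the version that was reviewed", which narrows the attack surface to the review itself.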
[View the full report](https://skillshield.io/report/f28d7208f62bf91e)
Powered by SkillShield