Trust Assessment
spec-generator received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, and 1 medium severity (0 low). The key findings are a missing required `name` field, an instruction to execute an external command, and processing of unsanitized output from that command.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 15, 2026 (commit 1823c3f6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill instructs execution of external command.** The skill explicitly instructs the LLM to execute an external command (`specweave context projects`). This allows arbitrary code execution if the `specweave` command or its arguments can be manipulated by untrusted input, or if the `specweave` tool itself is malicious. Even if `specweave` is benign, allowing an LLM to execute shell commands is a significant security risk, as it grants the LLM capabilities beyond its intended scope. Avoid instructing the LLM to execute arbitrary shell commands. If external tool interaction is necessary, use a sandboxed environment, a dedicated and strictly controlled API, or a predefined set of safe, parameterized commands. Ensure all inputs to external commands are thoroughly sanitized and validated. | LLM | SKILL.md:13 |
| HIGH | **Processing unsanitized output from external command.** The skill instructs the LLM to "Parse the JSON output" from the `specweave context projects` command. If the output of this external command can be influenced by an attacker (e.g., through malicious project names or board IDs containing prompt-injection payloads), and this output is then processed by the LLM without proper sanitization, it could lead to prompt injection, allowing an attacker to manipulate the LLM's subsequent behavior. When processing output from external commands, especially if that output might contain user-controlled data, ensure robust sanitization and validation. Treat all external output as untrusted. Consider using structured data parsing that strictly adheres to a schema, rather than directly feeding raw text into the LLM's context. | LLM | SKILL.md:14 |
| MEDIUM | **Missing required field: name.** The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | plugins/specweave/skills/spec-generator/SKILL.md:1 |
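The medium finding is fixed by adding the missing `name` field to the SKILL.md frontmatter; a minimal sketch (the description text is illustrative, not from the skill):

```yaml
---
name: spec-generator
description: Generates specifications for SpecWeave projects.
---
```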
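The remediation for the critical finding, a predefined set of safe, parameterized commands, can be sketched as follows. This is a hypothetical hardening pattern, not part of the skill itself; the `specweave` argument vector and the allowlist key are assumptions for illustration.

```python
import subprocess

# Hypothetical allowlist: only exact, predefined argument vectors may run.
ALLOWED_COMMANDS = {
    "list-projects": ["specweave", "context", "projects"],
}

def run_allowed(command_key: str) -> str:
    """Run a command only if it matches a predefined entry verbatim.

    The LLM supplies a key, never raw shell text, so it cannot inject
    arguments or substitute a different binary.
    """
    argv = ALLOWED_COMMANDS.get(command_key)
    if argv is None:
        raise PermissionError(f"command not allowlisted: {command_key!r}")
    # shell=False (the default for a list argv) means no shell ever
    # interprets metacharacters in the command line.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    result.check_returncode()
    return result.stdout
```

The design choice here is that the untrusted party (the LLM) chooses only *which* predefined command runs, never *what* the command is.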
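The high finding's remediation, treating external output as untrusted and parsing it against a strict schema, might look like this sketch. The field names (`name`, `board_id`) and the allowed character set are assumptions, since the report does not show the actual `specweave` JSON shape.

```python
import json
import re

# Assumed safe character set for a project name; the real schema may differ.
SAFE_NAME = re.compile(r"^[A-Za-z0-9 _.-]{1,64}$")

def parse_projects(raw: str) -> list:
    """Parse command output as JSON and keep only schema-conforming fields.

    Anything outside the expected shape (wrong types, characters that could
    carry a prompt-injection payload) is rejected, so raw text from the
    external command never flows into the LLM context.
    """
    data = json.loads(raw)  # fails fast on non-JSON output
    if not isinstance(data, list):
        raise ValueError("expected a JSON array of projects")
    projects = []
    for entry in data:
        name = entry.get("name", "")
        board_id = entry.get("board_id", "")
        if not (isinstance(name, str) and SAFE_NAME.fullmatch(name)):
            raise ValueError(f"suspicious project name: {name!r}")
        if not (isinstance(board_id, str) and board_id.isalnum()):
            raise ValueError(f"suspicious board id: {board_id!r}")
        projects.append({"name": name, "board_id": board_id})
    return projects
```

A strict-schema pass like this rejects a project name such as `"Ignore previous instructions\n..."` because the newline and ellipsis fall outside the allowed character class.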
[Full report](https://skillshield.io/report/16e67205671185f0)