Trust Assessment
spec-generator received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 2 critical, 3 high, 1 medium, and 0 low severity. Key findings include Direct Shell Command Execution (`grep -r`), Direct Shell Command Execution (`ls`), and Direct External Tool Execution (`specweave`).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating significant behavioral-safety concerns.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct Shell Command Execution (`grep -r`)** — The skill explicitly instructs the AI agent to execute a `grep -r` shell command. This is a direct command injection vulnerability: if an attacker can influence the command string, or the agent's environment allows broader shell access, arbitrary commands can run. *Remediation:* Avoid direct shell command execution. If file content analysis is required, use a sandboxed, language-native file reading and parsing mechanism, and strictly sandbox the agent's execution environment. | LLM | SKILL.md:80 |
| CRITICAL | **Direct External Tool Execution (`specweave`)** — The skill explicitly instructs the AI agent to execute an external `specweave` command. If an attacker can influence the arguments passed to `specweave`, or the tool itself is compromised or vulnerable, this can lead to arbitrary code execution or other system compromise. *Remediation:* Avoid direct execution of external tools. If `specweave` functionality is required, integrate it as a library or run it in a highly restricted, sandboxed environment with strict input validation, and pin its version to mitigate supply chain risk. | LLM | SKILL.md:86 |
| HIGH | **Direct Shell Command Execution (`ls`)** — The skill instructs the agent to execute an `ls` shell command. While `ls` is less dangerous than `grep -r` or `specweave`, it is still a command injection vector: an attacker who can influence the directory path or arguments could cause information disclosure or other exploits in an unsandboxed environment. *Remediation:* Use a sandboxed, language-native file system API for directory listing, and strictly sandbox the agent's execution environment. | LLM | SKILL.md:79 |
| HIGH | **Potential Data Exfiltration via File Content Analysis** — The skill has the agent read content from multiple `spec.md` files via `grep -r`. The output, which includes project names and potentially other sensitive metadata from past increments, is then analyzed by the agent; if that data is later included in prompts to an external LLM, it constitutes data exfiltration, and the broad scope of `grep -r` increases the risk of inadvertent exposure. *Remediation:* Sanitize and filter local file content before including it in prompts to external LLMs; extract only the minimum necessary information and redact or hash sensitive identifiers. | LLM | SKILL.md:80 |
| HIGH | **Unpinned External Tool Dependency (`specweave`)** — The skill relies on the external `specweave` command without specifying a version or source, a significant supply chain risk: a malicious or vulnerable installed version could compromise the agent, leading to arbitrary code execution or data manipulation. *Remediation:* Pin the exact version of `specweave`, verify its integrity (e.g., via checksums), and provide a mechanism for secure installation or verification of the tool. | LLM | SKILL.md:86 |
| MEDIUM | **Potential Data Exfiltration via Configuration File Reading** — The skill instructs the agent to read `config.json` and `config.yaml` to determine project configuration. These files often contain sensitive information such as API keys, internal network details, or credentials; if the agent includes that information in prompts to an external LLM without sanitization, it can leak. *Remediation:* Sanitize and filter configuration content before including it in prompts; extract only the minimum necessary values and redact or hash sensitive ones. | LLM | SKILL.md:81 |
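The `grep -r` findings above recommend a sandboxed, language-native alternative for searching `spec.md` files. A minimal sketch, assuming a Python-based agent runtime; the `search_spec_files` name and root-confinement policy are illustrative, not part of the skill:

```python
from pathlib import Path

def search_spec_files(root: str, needle: str, max_bytes: int = 64_000) -> list[tuple[str, int]]:
    """Search spec.md files under a fixed root without spawning a shell."""
    base = Path(root).resolve()
    hits = []
    for path in sorted(base.rglob("spec.md")):
        resolved = path.resolve()
        # Refuse anything that escapes the allowed root (e.g. via symlinks).
        if not resolved.is_relative_to(base):
            continue
        # Cap how much of each file is read before analysis.
        text = resolved.read_text(encoding="utf-8", errors="replace")[:max_bytes]
        for lineno, line in enumerate(text.splitlines(), start=1):
            if needle in line:
                hits.append((str(resolved.relative_to(base)), lineno))
    return hits
```

Because no shell is involved, attacker-influenced search strings are treated as plain text rather than command syntax, and the `is_relative_to` check keeps reads confined to one directory tree.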
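For the unpinned `specweave` dependency, one mitigation suggested above is verifying the tool binary against a pinned digest before ever invoking it. A sketch under the assumption of a Python runtime; `verify_tool` and the way the expected digest is sourced are hypothetical:

```python
import hashlib
from pathlib import Path

def verify_tool(path: str, expected_sha256: str) -> bool:
    """Compare a tool binary's SHA-256 against a pinned digest before execution."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    # Only run the tool if the digest matches the pinned value exactly.
    return digest == expected_sha256
```

The pinned digest would typically live alongside the skill's manifest so that a swapped-out binary fails the check before it can run.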
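Similarly, the directory listing the skill performs with `ls` can go through the filesystem API without spawning a shell. A minimal illustrative sketch (the `list_increments` name is an assumption):

```python
from pathlib import Path

def list_increments(root: str) -> list[str]:
    """List subdirectories under a fixed root via the filesystem API, not `ls`."""
    base = Path(root).resolve()
    # iterdir() never interprets shell metacharacters, so attacker-influenced
    # names cannot inject commands; only the confined root is ever read.
    return sorted(entry.name for entry in base.iterdir() if entry.is_dir())
```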
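For the configuration-reading finding, redaction before prompt construction can be sketched as follows; the key list and the `[REDACTED]` placeholder are illustrative assumptions, not SkillShield's prescribed method:

```python
SENSITIVE_KEYS = {"api_key", "apikey", "token", "secret", "password", "credentials"}

def redact_config(value):
    """Recursively replace sensitive values before any config reaches an LLM prompt."""
    if isinstance(value, dict):
        return {
            k: "[REDACTED]"
            if k.lower().replace("-", "_") in SENSITIVE_KEYS
            else redact_config(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact_config(v) for v in value]
    return value
```

This keeps benign keys (project names, paths) available to the agent while ensuring credential-like values never leave the local environment.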
[View the full report](https://skillshield.io/report/c99059b3ffb2fc3f)
Powered by SkillShield