Trust Assessment
email-template-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, User input directly embedded in LLM prompt, Arbitrary file write via user-controlled output path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary file write via user-controlled output path The skill allows users to specify an arbitrary output file path via the `-o` or `--output` command-line option. The content generated by the LLM is then written to this user-controlled path using `fs.writeFileSync`. An attacker can exploit this by providing a path to a sensitive system file (e.g., `/etc/passwd`, `~/.bashrc`, `~/.ssh/authorized_keys`) or other critical files, potentially overwriting them with arbitrary content generated by the LLM (which could be malicious if combined with prompt injection). This could lead to denial of service, privilege escalation, or system compromise. Restrict the output file path to a safe, designated directory (e.g., a temporary directory or a user-specific output folder). Validate and sanitize the provided file path to prevent directory traversal attacks (e.g., `../`, absolute paths). Do not allow writing to arbitrary locations on the filesystem. | LLM | src/cli.ts:20 | |
| HIGH | User input directly embedded in LLM prompt The `description` argument, provided by the user via the command line, is directly concatenated into the LLM's user message without any sanitization or validation. This creates a classic prompt injection vulnerability, allowing an attacker to manipulate the LLM's behavior by crafting a malicious `description`. This could lead to the LLM generating unintended or harmful content, ignoring system instructions, or attempting to disclose information if it had access to any. Implement robust input validation and sanitization for user-provided `description`. Consider using a templating approach that strictly separates user input from system instructions, or employ LLM-specific prompt injection defenses such as input/output parsing and instruction tuning. | LLM | src/index.ts:20 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/email-template-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/53c6ca5f8d899e19)
Powered by SkillShield