Trust Assessment
next-config-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User-Controlled File Content, Arbitrary File Write via User-Controlled Output Path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Controlled File Content The skill directly concatenates the content of the user-specified `package.json` file into the LLM's user prompt. An attacker can craft a malicious `package.json` file in their project directory (or point the `dir` argument to a location with such a file) to manipulate the LLM's instructions, leading it to generate arbitrary, potentially harmful, code or reveal sensitive information. This generated content is then written to a file, creating a critical exploit path. Implement strict sanitization or validation of user-controlled file content before including it in the LLM prompt. Consider using a structured input format for context that separates data from instructions, or employ prompt templating techniques that prevent instruction injection. Ideally, only extract specific, safe fields from `package.json` rather than injecting the entire file content. | LLM | src/index.ts:15 | |
| CRITICAL | Arbitrary File Write via User-Controlled Output Path The skill writes the LLM-generated configuration content to a file path specified directly by the user via the `--output` option (`options.output`). This allows an attacker to specify an arbitrary file path (e.g., `/etc/bash.bashrc`, `~/.bashrc`, `../../.env`) to overwrite critical system files or other sensitive files with potentially malicious JavaScript generated by the LLM (via prompt injection). This combination of prompt injection and arbitrary file write creates a severe command injection vulnerability. Restrict the `--output` path to a safe, designated directory (e.g., the current working directory or a specific output folder) and prevent path traversal (e.g., `../`). Validate the output filename to ensure it does not contain directory separators. Alternatively, prompt the user for confirmation before overwriting existing files, especially outside the current project directory. | LLM | src/cli.ts:15 | |
| HIGH | Data Exfiltration via User-Controlled File Read The skill reads the `package.json` file from a directory specified by the user (`dir`). While `package.json` typically does not contain highly sensitive data, an attacker could manipulate the `dir` argument to point to a location containing other sensitive files (e.g., `../../.env` if `package.json` were symlinked or if the skill were modified to read other files). The content of this file is then sent to the LLM, creating a potential vector for data exfiltration if the LLM can be prompted to reveal this information. Implement strict validation and sanitization of the `dir` argument to prevent path traversal. Restrict the `dir` to the current working directory or a designated project subdirectory. If reading other files is intended, explicitly define and validate those file paths rather than relying on user-controlled directory inputs. | LLM | src/index.ts:11 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/next-config-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/6ae463213755b168)
Powered by SkillShield