Trust Assessment
nginx-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Direct User Input in LLM Prompt, Arbitrary File Write with LLM-Generated Content.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Direct User Input in LLM Prompt The user-provided 'description' is directly embedded into the 'content' field of the user message sent to the OpenAI API without any sanitization or validation. This creates a classic prompt injection vulnerability, allowing a malicious user to craft the 'description' to override the system prompt, manipulate the LLM's behavior, or elicit unintended responses. For example, a user could instruct the LLM to ignore its role as an Nginx expert and generate arbitrary code or text, which could then be written to a file. Implement robust input sanitization or use a more secure prompting technique (e.g., few-shot examples, input validation, or a separate moderation layer) to prevent user input from directly manipulating the LLM's instructions. Consider using a structured input format for the LLM if possible, or at least escaping special characters that could break out of the intended prompt structure. | LLM | src/index.ts:19 | |
| HIGH | Arbitrary File Write with LLM-Generated Content The skill allows users to specify an output file path via the '-o, --output <file>' option, where the LLM-generated Nginx configuration will be written. When combined with the prompt injection vulnerability (SS-LLM-001), an attacker could craft a malicious 'description' to make the LLM generate arbitrary code (e.g., a shell script, a cron job entry, or a malicious configuration file). This content could then be written to any file path accessible by the process running the skill, potentially leading to command injection, privilege escalation, or system compromise by overwriting sensitive system files or injecting malicious scripts. 1. Address the Prompt Injection vulnerability (SS-LLM-001) to prevent the LLM from generating arbitrary, non-Nginx content. 2. Restrict file write paths: If arbitrary file writing is necessary, restrict the output directory to a safe, non-system-critical location (e.g., a dedicated temporary directory or a user-specific output folder). Do not allow writing to absolute paths or paths outside a designated safe zone. Validate and sanitize the 'options.output' path to prevent path traversal attacks (e.g., '..', absolute paths). 3. Implement content validation: Before writing to disk, parse and validate the generated Nginx configuration to ensure it only contains valid Nginx directives and does not include any executable code or malicious constructs. | LLM | src/cli.ts:20 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/nginx-config-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/bbe9db83839bab71)
Powered by SkillShield