Trust Assessment
nginx-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Arbitrary file write via user-controlled output path, User input directly injected into LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary file write via user-controlled output path The skill allows users to specify an arbitrary file path via the `--output` option in `src/cli.ts`. The content written to this file is the Nginx configuration generated by the LLM, which is also influenced by user input. This combination allows a malicious actor to potentially overwrite critical system files or write malicious configurations to sensitive locations (e.g., `/etc/nginx/nginx.conf`, `/etc/crontab`, `/root/.ssh/authorized_keys`) if the process running the skill has the necessary write permissions. This could lead to denial of service, privilege escalation, or arbitrary code execution. Restrict the output path to a safe, designated directory (e.g., a temporary directory or a user-specific output folder). Validate and sanitize the `options.output` path to prevent directory traversal attacks (e.g., `../../`). Consider requiring explicit confirmation before overwriting existing files. | LLM | src/cli.ts:20 | |
| HIGH | User input directly injected into LLM prompt The user's description (`options.description`) is directly concatenated into the LLM's user message in `src/index.ts`. While the system prompt attempts to guide the LLM, a sophisticated prompt injection attack could potentially bypass these instructions, causing the LLM to generate unintended or malicious Nginx configurations. Given that the generated configuration can then be written to an arbitrary file (as identified in another finding), a successful prompt injection could lead to severe consequences, including arbitrary code execution or system compromise. Implement robust input sanitization and validation for `options.description`. Consider using techniques like prompt templating with strict variable insertion, or employing an LLM guardrail to detect and reject malicious prompts. If possible, limit the scope of what the LLM can generate (e.g., disallow certain directives or patterns in the output). | LLM | src/index.ts:18 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/nginx-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/82e774ac8de4417e)
Powered by SkillShield