Trust Assessment
cors-gen received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Direct User Input to LLM Prompt (Prompt Injection), Uncontrolled File Write to User-Specified Path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Direct User Input to LLM Prompt (Prompt Injection) The user-provided `description` is directly interpolated into the LLM's user message without sanitization or validation. This allows a malicious user to craft input that could manipulate the LLM's behavior, override system instructions, or attempt to extract sensitive information from the LLM's context. The output of this potentially manipulated LLM is then written to a file, exacerbating the risk. Implement robust input validation and sanitization for the `description` argument. Consider using a more structured approach for passing user requirements to the LLM (e.g., JSON schema, dedicated tool inputs) rather than direct string interpolation. Ensure the LLM's system prompt is robust against adversarial inputs. | LLM | src/index.ts:10 | |
| HIGH | Uncontrolled File Write to User-Specified Path The skill allows writing the AI-generated CORS configuration to an arbitrary file path specified by the user via the `--output` option. A malicious user could exploit this to overwrite critical system files (e.g., `/etc/passwd`, `/etc/hosts`, configuration files) or sensitive user data with potentially malformed or malicious AI-generated content, leading to denial of service, system instability, or further compromise if the AI output can be coerced into executable code. Restrict the output file path to a predefined, safe directory (e.g., a temporary directory or a subdirectory within the skill's own scope). Implement strict validation and sanitization of the `options.output` path to prevent directory traversal attacks (e.g., `../`). Consider prompting the user for confirmation before overwriting existing files, especially outside a designated safe area. | LLM | src/cli.ts:15 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/cors-config-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/288c819556f36012)
Powered by SkillShield