Trust Assessment
cors-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Direct Prompt Injection via User Input, Arbitrary File Write via User-Controlled Output Path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Direct Prompt Injection via User Input The user-provided 'description' argument is directly concatenated into the LLM's user message without any sanitization or structured prompting. This allows an attacker to inject malicious instructions into the prompt, potentially overriding the system prompt, extracting sensitive information, or causing the LLM to generate harmful content. Implement robust input sanitization for user-provided 'description'. Consider using structured prompting techniques (e.g., JSON schema for input, few-shot examples) or LLM guardrails/moderation APIs to prevent prompt injection. Avoid direct string concatenation of untrusted input into prompts. | LLM | src/index.ts:9 | |
| HIGH | Arbitrary File Write via User-Controlled Output Path The tool allows users to specify an arbitrary output file path via the `-o` or `--output` option. The `fs.writeFileSync` function then writes the LLM-generated content to this user-controlled path. A malicious actor could exploit this to overwrite critical system files (e.g., `/etc/passwd`, `~/.bashrc`), create executable scripts in sensitive locations, or write data to paths that could lead to data exfiltration or further system compromise. Restrict the output file path to a safe, designated directory (e.g., the current working directory or a specific subdirectory). Validate and sanitize the `options.output` value to prevent directory traversal attacks (e.g., `../../`). Consider prompting the user for confirmation if writing outside the current directory or to a potentially sensitive path. | LLM | src/cli.ts:14 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/cors-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/df106d4d43034cfb)
Powered by SkillShield