Trust Assessment
ci-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 3 critical, 0 high, 1 medium, and 0 low severity. Key findings include "Data Exfiltration to Third-Party LLM via File Contents", "Prompt Injection via Untrusted File Contents", and "Unpinned npm dependency version".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 10/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Data Exfiltration to Third-Party LLM via File Contents.** The skill reads the contents of local project configuration files (e.g., `package.json`, `Dockerfile`, `vercel.json`) and sends them directly to the OpenAI API as part of the user prompt. These files can contain sensitive information, proprietary configurations, or internal details that should not be shared with an external LLM service. The 5 KB per-file size limit is a weak mitigation against leaking sensitive data. *Remediation:* filter or redact file contents before sending them to the LLM, provide an explicit user-consent mechanism for data sharing, document clearly what data is transmitted to external services, and consider local models or data anonymization where possible. | LLM | src/index.ts:60 |
| CRITICAL | **Prompt Injection via Untrusted File Contents.** The LLM's user prompt is constructed by directly embedding the contents of local project files, which are untrusted and user-controlled. An attacker could craft specific file contents (e.g., a `package.json` description, a `Dockerfile` comment, or a `vercel.json` value) to inject malicious instructions into the prompt, manipulating the LLM into generating unintended or harmful output and potentially bypassing the system prompt's instructions (e.g., "Output ONLY the YAML contents. No markdown code fences."). *Remediation:* sanitize and validate all file contents before embedding them in the prompt, use structured data formats or API calls for sensitive prompt components instead of direct string concatenation, and employ LLM-specific prompt-injection defenses such as instruction-following models or input/output validation. | LLM | src/index.ts:60 |
| CRITICAL | **Command Injection via Maliciously Generated CI/CD Workflow.** The skill writes LLM-generated workflow YAML to a file (`.github/workflows/ci.yml` by default) intended for execution by a CI system such as GitHub Actions. If the LLM is compromised via prompt injection (e.g., by malicious file contents), it could be coerced into emitting arbitrary, harmful shell commands in the YAML, which the CI runner would then execute — a severe command injection on the CI runner. *Remediation:* strictly validate and sanitize the LLM's output to ensure it conforms to the expected YAML structure and contains no malicious commands, consider an allowlist of permitted commands or actions in the generated YAML, and apply this in conjunction with the prompt-injection mitigations. | LLM | src/cli.ts:50 |
| MEDIUM | **Unpinned npm dependency version.** The dependency `commander` is not pinned to an exact version (`^12.1.0`). *Remediation:* pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/ci-gen/package.json |
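The medium-severity finding is fixed by replacing the caret range with an exact version in `package.json`:

```json
{
  "dependencies": {
    "commander": "12.1.0"
  }
}
```

Running `npm install --save-exact commander@12.1.0` (or setting `save-exact=true` in `.npmrc`) keeps future installs pinned as well.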
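The first finding's remediation ("stricter filtering or redaction of file contents") could look something like the sketch below. This is not ci-gen's code — the function name and the secret-key pattern are illustrative assumptions; a real implementation would need per-format handling beyond JSON.

```typescript
// Hypothetical sketch: redact likely-sensitive values from a JSON config
// file before its contents are embedded in an LLM prompt. The key pattern
// below is an illustrative assumption, not part of ci-gen.
const SECRET_KEY_PATTERN = /(token|secret|password|api[_-]?key|private[_-]?key)/i;

function redactJsonSecrets(raw: string): string {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return raw; // not JSON; a caller would apply other redaction strategies
  }
  const redact = (value: unknown): unknown => {
    if (Array.isArray(value)) return value.map(redact);
    if (value !== null && typeof value === "object") {
      return Object.fromEntries(
        Object.entries(value as Record<string, unknown>).map(([k, v]) =>
          SECRET_KEY_PATTERN.test(k) ? [k, "[REDACTED]"] : [k, redact(v)]
        )
      );
    }
    return value; // primitives pass through unchanged
  };
  return JSON.stringify(redact(parsed), null, 2);
}
```

Key-name matching alone will miss secrets stored under innocuous keys, which is why the finding also recommends explicit user consent and documentation of what is transmitted.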
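For the prompt-injection finding, one common defense is to fence untrusted file contents inside unpredictable per-request delimiters so the model can be told to treat everything between them strictly as data. A minimal sketch, assuming a helper (`fenceUntrusted`) that does not exist in ci-gen:

```typescript
import { randomUUID } from "node:crypto";

// Sketch of an assumed mitigation, not ci-gen's actual code: wrap untrusted
// file contents in a random boundary. Because the boundary is generated per
// call, the file's contents cannot forge a matching closing fence and
// "escape" into the instruction portion of the prompt.
function fenceUntrusted(path: string, contents: string): string {
  const boundary = `UNTRUSTED-${randomUUID()}`;
  return [
    `<<<BEGIN ${boundary} path="${path}">>>`,
    contents,
    `<<<END ${boundary}>>>`,
  ].join("\n");
}
```

Fencing is only a partial defense: the system prompt must also instruct the model to ignore any instructions found between the markers, and output validation is still required.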
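The command-injection finding recommends validating generated YAML against an allowlist before writing it to `.github/workflows/ci.yml`. A rough line-based sketch of that idea follows — the allowlist and patterns are assumptions, and a production check would parse the YAML properly rather than scan lines:

```typescript
// Sketch only: a crude post-generation check on LLM-emitted workflow YAML.
// ALLOWED_ACTIONS and the suspicious-command pattern are illustrative
// assumptions, not part of ci-gen. A real validator should use a YAML parser.
const ALLOWED_ACTIONS = ["actions/checkout@", "actions/setup-node@"];

function validateWorkflowYaml(yaml: string): string[] {
  const problems: string[] = [];
  // The system prompt forbids markdown fences, so their presence is a red flag.
  if (/^```/m.test(yaml)) problems.push("output contains markdown code fences");
  for (const line of yaml.split("\n")) {
    const uses = line.match(/^\s*(?:-\s*)?uses:\s*(\S+)/);
    if (uses && !ALLOWED_ACTIONS.some((p) => uses[1].startsWith(p))) {
      problems.push(`unlisted action: ${uses[1]}`);
    }
    // Flag obvious pipe-to-shell patterns in run steps.
    if (/\bcurl\b.*\|\s*(ba)?sh/.test(line)) {
      problems.push(`suspicious pipe-to-shell: ${line.trim()}`);
    }
  }
  return problems;
}
```

Rejecting the file (or requiring human review) when `validateWorkflowYaml` returns any problems keeps a prompt-injected model from silently landing executable commands on the CI runner.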
Embed Code
[SkillShield report](https://skillshield.io/report/23eca6781d814fcf)
Powered by SkillShield