Trust Assessment
ci-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User-Controlled File Contents, Arbitrary File Write via User-Controlled Output Path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Controlled File Contents The `generateWorkflow` function directly embeds the contents of user-controlled project configuration files (`projectInfo.contents`) into the LLM prompt. A malicious actor could craft a `package.json`, `Dockerfile`, or other scanned config file to include prompt injection instructions. These instructions could manipulate the LLM to ignore its system prompt (e.g., 'Output ONLY the YAML contents') and instead perform unauthorized actions, such as exfiltrating the `OPENAI_API_KEY` (which is available to the LLM in its context), generating malicious code, or revealing internal system information. The system prompt's attempt to constrain output is insufficient against sophisticated prompt injection. Implement robust sanitization or a structured input mechanism for user-controlled file contents before they are included in the LLM prompt. Consider using a separate, isolated LLM call for sensitive operations or employing techniques like input validation, content filtering, or a 'sandwich' prompt defense. Ensure that the LLM's access to sensitive environment variables like `OPENAI_API_KEY` is strictly controlled and not implicitly exposed through prompt context. | LLM | src/index.ts:50 | |
| HIGH | Arbitrary File Write via User-Controlled Output Path The `ai-ci` CLI tool allows users to specify an arbitrary output path for the generated workflow using the `--output` or `-o` option. The `fs.writeFileSync(outPath, workflow + "\n")` call in `src/cli.ts` directly uses this user-controlled `outPath` without sufficient validation or sanitization. This vulnerability allows an attacker to overwrite or create files in arbitrary locations on the file system where the command is executed. This could lead to denial of service (by overwriting critical system files), privilege escalation (by overwriting configuration files like `~/.bashrc` or `~/.ssh/authorized_keys`), or the creation of malicious scripts in executable paths. Restrict the output path to a safe, designated directory (e.g., a subdirectory within the project). Validate user-provided paths to ensure they do not contain directory traversal sequences (e.g., `../`) and do not point to sensitive system locations. Consider using a file picker or requiring explicit confirmation for writes outside the project directory. | LLM | src/cli.ts:40 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/ci-config-gen/package.json | |
| MEDIUM | Data Exfiltration to Third-Party LLM via Config File Upload The `scanProject` function reads the contents of various project configuration files (e.g., `package.json`, `Dockerfile`, `vercel.json`) up to 5KB each. These contents are then directly transmitted to the OpenAI API as part of the prompt in `generateWorkflow`. While this is the intended functionality for generating CI/CD workflows, it poses a data exfiltration risk. Sensitive information (e.g., internal API keys, database credentials, proprietary build steps, internal network configurations) present in these config files will be sent to a third-party LLM provider (OpenAI). Users might not fully understand the privacy implications of sharing potentially sensitive project configuration details with an external AI service. Clearly inform users about the specific data being sent to the LLM and its implications. Provide options for users to redact sensitive information from config files before processing or to exclude certain files/sections. Implement client-side scanning to identify and warn about common patterns of sensitive data before transmission. Consider local processing for highly sensitive data or using a privacy-preserving LLM solution. | LLM | src/index.ts:24 | |
| INFO | Unpinned Dependencies in package.json The `package.json` file uses caret (`^`) ranges for dependencies (e.g., `"openai": "^4.52.0"`). While `package-lock.json` pins exact versions, a fresh install without the lock file, or if the lock file is ignored, could pull in newer versions of these packages. This introduces a minor supply chain risk, as a new version might contain vulnerabilities or malicious code not present in the version tested during development. Consider using exact version pinning (e.g., `"openai": "4.52.0"`) in `package.json` for production builds to ensure deterministic dependency resolution and reduce the risk of unexpected changes or vulnerabilities introduced by new package versions. Regularly audit and update dependencies. | LLM | package.json:26 |
Scan History
Embed Code
[](https://skillshield.io/report/cd65d00cb1c5f291)
Powered by SkillShield