Trust Assessment
migration-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User-Controlled Schema Content, Data Exfiltration via Arbitrary File Read and LLM Transmission.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Controlled Schema Content The `generateMigration` function directly embeds user-controlled `schemaContent` into the LLM's user prompt without any sanitization or escaping. A malicious user can craft their ORM schema files (which are read from a user-specified directory) to include prompt injection instructions, potentially manipulating the LLM's behavior, extracting sensitive information, or generating harmful outputs. The `migrationName` parameter is also directly embedded, providing another, albeit smaller, vector for prompt injection. Implement robust sanitization or escaping for all user-controlled inputs (`schemaContent`, `migrationName`) before embedding them into LLM prompts. Consider using a structured input format (e.g., JSON) for schema data and instructing the LLM to parse it strictly, or employ prompt templating libraries that offer injection protection. Additionally, consider using a separate, less privileged LLM for processing untrusted inputs or implementing a content moderation layer. | LLM | src/index.ts:50 | |
| HIGH | Data Exfiltration via Arbitrary File Read and LLM Transmission The skill allows users to specify an arbitrary project directory (`--dir` option, mapped to `cwd`). The `findSchemaFiles` function then uses `glob` to search for schema files within this user-controlled `cwd`. The content of these found files is subsequently read using `fs.readFileSync` and directly transmitted to the OpenAI API as part of the `schemaContent` in the `generateMigration` function. This enables a malicious user to exfiltrate the content of any file readable by the skill's process to a third-party service (OpenAI) by pointing `cwd` to sensitive directories (e.g., `/etc`, `/home/user/`) and crafting a schema file pattern that matches a target file. Restrict the `cwd` parameter to a safe, sandboxed directory or enforce strict validation to prevent access to sensitive system paths. Implement a whitelist of allowed directories or use a chroot jail if possible. If arbitrary file reading is necessary, implement strict content filtering and anonymization before sending data to external APIs, and inform users about data transmission practices. | LLM | src/index.ts:66 | |
| HIGH | Credential Harvesting via Prompt Injection The skill initializes the OpenAI client, which by default reads the `OPENAI_API_KEY` environment variable. Due to the critical prompt injection vulnerability identified in `generateMigration`, a sophisticated attacker could craft a malicious schema file that, when processed by the LLM, attempts to extract or reveal the `OPENAI_API_KEY` from the LLM's internal context or environment if the LLM has any access to such information (e.g., through reflection or tool use capabilities). In addition to remediating the prompt injection vulnerability, ensure that the LLM is strictly sandboxed and has no access to environment variables or internal system information. Consider using a dedicated, short-lived API key for each LLM interaction or implementing a proxy service that manages API key access and filters LLM outputs for sensitive data. | LLM | src/index.ts:4 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/migration-gen/package.json | |
| MEDIUM | Excessive Permissions: Arbitrary File Write The skill allows users to specify an arbitrary output directory (`--output` option, mapped to `outputDir`) and migration name (`--name` option, mapped to `name`). The `createMigrationFiles` function then uses these user-controlled inputs to construct paths for `fs.mkdirSync` and `fs.writeFileSync`. While `path.join` normalizes path segments, a malicious user can specify an absolute path for `outputDir` (e.g., `/usr/local/bin`, `/etc`) to write files to arbitrary locations on the filesystem where the skill has write permissions. This could lead to overwriting critical system files, creating malicious scripts in executable paths, or consuming disk space in sensitive areas. Restrict the `outputDir` parameter to a safe, sandboxed directory (e.g., a temporary directory or a user-specific data directory). Implement strict validation to prevent absolute paths or paths outside a designated safe area. Ensure the skill runs with the principle of least privilege, limiting its write access to only necessary directories. | LLM | src/index.ts:75 |
Scan History
Embed Code
[](https://skillshield.io/report/202c5719149360d6)
Powered by SkillShield