Trust Assessment
vercel-config-gen received a trust score of 68/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 3 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Data Exfiltration via Local File Upload to LLM, Excessive File System Permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Data Exfiltration via Local File Upload to LLM The skill reads the content of local project configuration files (e.g., package.json, next.config.js) from the current working directory and sends them directly to the OpenAI API as part of the user prompt. This can lead to the exfiltration of sensitive information contained within these files (e.g., private dependencies, internal configurations, environment variable placeholders, or even API keys if present in config files) to a third-party service (OpenAI). The `slice(0, 3000)` limits the amount of data, but does not prevent exfiltration. Avoid sending raw file contents to external APIs. Instead, extract only necessary, non-sensitive metadata from configuration files. If sending file content is essential, implement explicit user consent mechanisms and robust sanitization to remove any potentially sensitive data before transmission. Consider hashing or anonymizing data where possible. | LLM | src/index.ts:13 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/vercel-config-gen/package.json | |
| MEDIUM | Excessive File System Permissions The skill uses `process.cwd()` and `fs.readFileSync` to read configuration files from the current working directory. While it targets specific file names, the broad scope of `process.cwd()` means it can access any file within the directory where the user executes the command, subject to OS permissions. This grants the skill more access than strictly necessary, increasing the attack surface for data exfiltration or unintended file access if combined with other vulnerabilities. Limit file system access to the absolute minimum required. If specific configuration files are needed, consider prompting the user for their location or using more constrained file access methods. Avoid reading arbitrary files from `process.cwd()` without explicit user interaction or strict validation. | LLM | src/index.ts:10 | |
| MEDIUM | Prompt Injection via Untrusted Local File Content The skill constructs the 'user' message for the OpenAI API by including content from local project configuration files (e.g., package.json, next.config.js). If these local files contain specially crafted strings or instructions, an attacker could potentially manipulate the behavior of the underlying LLM (gpt-4o-mini). This could lead to the LLM generating malicious or unintended `vercel.json` configurations, revealing internal information, or deviating from its intended purpose. Implement robust input sanitization and validation for any user-controlled or untrusted content before it is passed to the LLM. Consider using a more specific and restrictive system prompt to better anchor the LLM's behavior and make it less susceptible to adversarial instructions embedded in the user content. Avoid sending raw, unvalidated file contents to the LLM. | LLM | src/index.ts:20 |
Scan History
Embed Code
[](https://skillshield.io/report/995fd1d95f88dd93)
Powered by SkillShield