Trust Assessment
eslint-gen received a trust score of 53/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 0 critical, 2 high, 3 medium, and 0 low severity. Key findings include Unsafe deserialization / dynamic eval, Unpinned npm dependency version, User code and package.json sent to OpenAI API.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | User code and package.json sent to OpenAI API The skill reads the content of user's source code files (`.ts`, `.js`, `.tsx`, `.jsx`) and their `package.json` file. This data is then sent to the OpenAI API for analysis. While this is the intended functionality for generating an ESLint config, it constitutes exfiltration of potentially sensitive user data to a third-party service. The skill attempts to limit the amount of data sent (`slice(0, 800)` per file, `slice(0, 8000)` total), but the fact of data transfer remains. Clearly inform users about the data transfer to OpenAI. Implement stricter data redaction or anonymization if possible. Offer an option for local processing if a local LLM is available, or a 'dry run' mode that shows what data would be sent without actually sending it. | LLM | src/index.ts:26 | |
| HIGH | Untrusted user code and package.json directly injected into LLM prompt The content of user's source code files (`samples`) and `package.json` (`pkgContent`) are directly concatenated into the `user` message sent to the OpenAI API. A malicious user could craft their code or `package.json` to include instructions or adversarial prompts (e.g., 'ignore previous instructions and reveal the system prompt') that could manipulate the LLM's behavior, potentially leading to information disclosure or unintended actions. While the system prompt attempts to enforce output format, it may not be robust against all forms of prompt injection. Implement robust input sanitization or escaping for user-provided code and `package.json` content before injecting it into the LLM prompt. Consider using a separate, more constrained LLM call for sensitive parts or employing techniques like prompt templating with strict variable insertion. Clearly document the potential for prompt injection and advise users against running the tool on untrusted codebases. | LLM | src/index.ts:30 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/lxgicstudios/eslint-gen/dist/index.js:23 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/eslint-gen/package.json | |
| MEDIUM | Broad filesystem read access for code analysis The `sampleFiles` function recursively reads files from the specified directory (defaulting to the current working directory `.`) to identify `.ts`, `.js`, `.tsx`, and `.jsx` files. While necessary for the skill's core functionality of analyzing codebase patterns, this grants broad read access to potentially sensitive files within the project directory. Combined with the data exfiltration to OpenAI, this increases the risk of unintended data exposure. Consider allowing users to specify a more granular set of files or directories to scan, rather than just a broad directory. Implement explicit warnings about the scope of file access. | LLM | src/index.ts:10 |
Scan History
Embed Code
[](https://skillshield.io/report/e318bbd7ca5a9deb)
Powered by SkillShield