Trust Assessment
eslint-config-gen received a trust score of 59/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include Unsafe deserialization / dynamic eval, Unpinned npm dependency version, User-controlled input directly fed to LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | User-controlled input directly fed to LLM prompt The skill constructs the LLM user message by directly concatenating content from the user's `package.json` and sampled code files. Malicious instructions embedded within these user-controlled files could attempt to manipulate the LLM's behavior, potentially leading to unintended actions or information disclosure. While the system prompt attempts to constrain the LLM's output, sophisticated injection techniques might bypass these safeguards. Implement robust input sanitization or validation for user-provided code and `package.json` content before feeding it to the LLM. Consider using a separate, more constrained LLM call to pre-process or validate user input. Reinforce system prompt instructions to strictly adhere to the task and ignore extraneous commands. | LLM | src/index.ts:30 | |
| HIGH | User's source code and package.json sent to OpenAI API The skill reads the user's `package.json` and samples from their source code files (`.ts`, `.js`, `.tsx`, `.jsx`) from the current working directory. This content is then transmitted to the OpenAI API for analysis. While this is the intended functionality of the skill, it constitutes data exfiltration of potentially sensitive intellectual property or proprietary code to a third-party service. The skill has broad read access to the project directory. Clearly and prominently disclose to the user that their code and `package.json` content will be sent to OpenAI. Provide information about OpenAI's data handling and privacy policies. Consider offering options for local-only analysis or allowing users to review/redact the data before transmission, if technically feasible. Limit the scope of file access to the absolute minimum required. | LLM | src/index.ts:29 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/lxgicstudios/eslint-config-gen/dist/index.js:23 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/eslint-config-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/08cfb5bc138186b8)
Powered by SkillShield