Trust Assessment
lazy-load-suggester received a trust score of 60/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via Untrusted File Content, Data Exfiltration of Local Files to Third-Party LLM.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Prompt Injection via Untrusted File Content The skill reads arbitrary local file content and directly injects it into the 'user' message of an OpenAI API call. A malicious user can create a file (e.g., `malicious.js`) within the scanned directory containing prompt injection instructions. These instructions could manipulate the LLM's behavior, causing it to ignore its system prompt, reveal sensitive information, or perform unintended actions. Implement robust input sanitization and separation for LLM prompts. Instead of directly concatenating untrusted file content into the user message, consider using a structured input format (e.g., JSON) where file content is clearly delineated as data, not instructions. Alternatively, use a separate, isolated LLM call for processing untrusted content, or employ a 'sandwich' prompt defense with clear delimiters and instructions for the LLM to treat content within those delimiters as data. | LLM | src/index.ts:26 | |
| HIGH | Data Exfiltration of Local Files to Third-Party LLM The `scanComponents` function reads the content of local files (JS, TS, JSX, TSX, Vue, Svelte) from a user-specified directory. This content is then sent to the OpenAI API via the `analyzeLazyLoad` function. Combined with a prompt injection vulnerability, a malicious user could instruct the LLM to output the contents of these local files, leading to exfiltration of potentially sensitive data to the LLM provider and back to the attacker. Avoid sending arbitrary local file content directly to external LLM services, especially when the content source is untrusted. If file content must be analyzed, consider anonymizing or redacting sensitive information before transmission. Implement strict access controls on which directories and file types can be scanned. Mitigate prompt injection to prevent the LLM from being coerced into revealing the data it processed. | LLM | src/index.ts:16 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/lazy-load-suggester/package.json | |
| MEDIUM | Excessive File System Read Permissions The `scanComponents` function allows scanning of an arbitrary directory specified by the user (defaulting to the current directory). While it ignores `node_modules`, `dist`, and `.git` directories, it can still read any file matching the specified extensions (`.js`, `.ts`, etc.) within the provided path. This broad read access, especially if the user specifies a root or sensitive directory, increases the attack surface for data exfiltration when combined with prompt injection. Restrict the scope of file system access to only necessary directories. For example, enforce that the scanned directory must be a subdirectory of the current working directory or explicitly whitelist allowed paths. Provide clear warnings to users about the implications of scanning broad directories. Ensure that the file types being scanned are genuinely relevant to the skill's purpose and do not inadvertently include sensitive configuration files. | LLM | src/index.ts:11 |
Scan History
Embed Code
[](https://skillshield.io/report/7e8f77c002e8d09f)
Powered by SkillShield