Trust Assessment
index-suggester received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Direct Prompt Injection via User File Content, Potential Data Exfiltration via LLM Response.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Direct Prompt Injection via User File Content The skill directly incorporates user-provided file content (`queryCode`) into the `user` message of the OpenAI API call without any sanitization, escaping, or clear separation from instructions. This allows an attacker to craft malicious input within their query files (e.g., SQL comments, string literals) that can override the system prompt and manipulate the LLM's behavior, leading to unintended actions, information disclosure, or denial of service. Implement robust input sanitization and escaping for `queryCode` before it is sent to the LLM. Consider using techniques like: 1. Strict Delimiters: Wrap user input in clear, unambiguous delimiters that the LLM is instructed to treat as literal data. 2. Instruction/Data Separation: Clearly separate instructions from user data in the prompt structure. 3. LLM Guardrails: Implement output parsing and validation to ensure the LLM's response adheres to expected formats and content. 4. Least Privilege: Restrict the LLM's capabilities and access to external tools or information. | LLM | src/index.ts:30 | |
| HIGH | Potential Data Exfiltration via LLM Response The skill reads the content of user-specified files (potentially multiple `.ts`, `.js`, `.sql` files from a directory) and sends them to an external LLM (OpenAI). Due to the direct prompt injection vulnerability, an attacker can embed instructions within their query files to compel the LLM to extract and return sensitive information from these processed files, effectively exfiltrating data that the user running the tool has access to. Address the underlying prompt injection vulnerability (SS-LLM-001). Additionally, consider: 1. Content Filtering: Implement client-side filtering or redaction of potentially sensitive patterns (e.g., API keys, PII) from `queryCode` before sending it to the LLM. 2. LLM Output Validation: Strictly validate and filter the LLM's output to prevent it from returning unexpected or sensitive information. 3. Scope Limitation: If possible, limit the types of files or directories the tool can access to only those strictly necessary for its function. | LLM | src/index.ts:14 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/index-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/2410763ec96515c4)
Powered by SkillShield