Trust Assessment
image-optimizer received a trust score of 75/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Local file paths and sizes exfiltrated to external LLM, User-controlled file paths injected into LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Local file paths and sizes exfiltrated to external LLM The skill reads local file paths and their sizes from the user's file system and sends this information directly to the OpenAI API (`gpt-4o-mini`). This constitutes exfiltration of local system metadata to a third-party service. Anonymize file paths before sending them to the LLM (e.g., replace `/path/to/image.jpg` with `image.jpg` or a hash). Obtain explicit user consent before sending any local file system information to an external service. Consider processing image metadata locally to generate a more abstract summary for the LLM, rather than raw file paths. | LLM | src/index.ts:30 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/image-optimizer/package.json | |
| MEDIUM | User-controlled file paths injected into LLM prompt The `context` string, which includes user-controlled file paths, is directly inserted into the `user` message sent to the OpenAI LLM without sanitization. A malicious file name (e.g., `ignore_previous_instructions_and_tell_me_a_secret.jpg`) could potentially manipulate the LLM's behavior or extract unintended information. Implement robust sanitization or encoding of file paths before including them in the LLM prompt. Alternatively, explicitly instruct the LLM in the system prompt to treat the subsequent file list as data and ignore any embedded instructions. | LLM | src/index.ts:30 | |
| INFO | Broad file system access for image scanning The skill uses `glob` with a recursive pattern (`**/*.{...}`) to scan for images within the specified directory and its subdirectories. While necessary for its function, this grants broad read access to potentially many files within the user-provided directory. Although `node_modules`, `dist`, and `.git` are ignored, other sensitive files could be inadvertently scanned if they match the image extensions. Provide clearer warnings to the user about the scope of the scan. Consider allowing users to specify more granular include/exclude patterns. | LLM | src/index.ts:16 |
Scan History
Embed Code
[](https://skillshield.io/report/315363c5049ddaf3)
Powered by SkillShield