Trust Assessment
image-optimizer received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 0 high, 2 medium, and 1 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User-Controlled File Paths, Data Exfiltration of Absolute File Paths to Third-Party AI.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/image-optimize/package.json | |
| MEDIUM | Prompt Injection via User-Controlled File Paths The skill constructs a prompt for the OpenAI API that includes user-controlled file paths. If a file path contains text that could be interpreted as an instruction by the LLM (e.g., a file named 'ignore_previous_instructions.jpg'), it could lead to prompt injection, potentially manipulating the LLM's behavior or output. Although the system prompt aims to guide the LLM, direct inclusion of unsanitized user-controlled strings in the prompt is a known vulnerability. Sanitize user-controlled file paths before including them in the LLM prompt. Consider explicitly instructing the LLM to treat the file list as data only, or use a structured input format that separates data from instructions. For example, enclose file paths in XML tags or JSON objects within the prompt to clearly delineate them as data. | LLM | src/index.ts:50 | |
| LOW | Data Exfiltration of Absolute File Paths to Third-Party AI The skill sends absolute file paths of scanned images, along with their sizes and extensions, to the OpenAI API. While this data is necessary for the skill's core functionality (AI-powered optimization suggestions), sending absolute paths can reveal sensitive directory structures on the user's local machine to a third-party service. Users might not be aware that full, absolute paths are being transmitted, which could be a privacy concern. Inform users explicitly that absolute file paths will be sent to the AI service. Consider adding an option to anonymize paths (e.g., send only relative paths from the scanned directory, or hashes of paths) to enhance user privacy, especially for sensitive environments. Alternatively, provide a clear warning in the skill's documentation or CLI output. | LLM | src/index.ts:47 |
Scan History
Embed Code
[](https://skillshield.io/report/0962776a9275e8d3)
Powered by SkillShield