Trust Assessment
grok-imagine received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. The key findings are Arbitrary File Read and Exfiltration, Arbitrary File Write via Output Directory, and HTML Injection (XSS) in Generated Output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary File Read and Exfiltration.** The skill reads arbitrary files from the filesystem via the `--input` argument; the file's content is base64 encoded and sent to the xAI API (https://api.x.ai/v1/images/edits) as part of the image editing request. This is a severe data exfiltration vulnerability: an attacker could read sensitive files (e.g., `/etc/passwd`, `.ssh/id_rsa`, configuration files) and transmit their contents to a third-party service. **Remediation:** restrict `--input` to a safe, temporary, or user-approved directory; enforce strict path validation to block traversal sequences (e.g., `../`); consider sandboxing file access or requiring explicit user confirmation for reads outside a designated skill directory. | LLM | scripts/gen.mjs:40 |
| HIGH | **Arbitrary File Write via Output Directory.** The `--out-dir` argument accepts an arbitrary output directory, so generated images, `prompts.json`, and `index.html` can be written anywhere the Node.js process has write permission. This could overwrite critical system files, fill disk space in sensitive areas, or write malicious content to web server roots, potentially enabling further compromise or denial of service. **Remediation:** restrict `--out-dir` to a dedicated, sandboxed output directory; validate that the resolved path stays within an allowed directory structure; avoid calling `resolve()` on user-provided paths without prior sanitization. | LLM | scripts/gen.mjs:109 |
| MEDIUM | **HTML Injection (XSS) in Generated Output.** The user-provided `prompt` is embedded directly into the `figcaption` element of the generated `index.html` without HTML escaping. A malicious prompt containing HTML or JavaScript (e.g., `<script>alert('XSS')</script>`) would execute when `index.html` is viewed in a browser — a client-side cross-site scripting (XSS) vulnerability. **Remediation:** HTML-escape `it.prompt` before embedding it, e.g., convert `<`, `>`, `&`, `'`, and `"` to their respective HTML entities. | LLM | scripts/gen.mjs:70 |
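The path-confinement remediation suggested for the first two findings can be sketched as a single guard that resolves a user-supplied path and rejects it if it escapes an allowed base directory. This is a minimal illustration, not code from the skill itself: `resolveWithinBase` and the directory names are hypothetical.

```javascript
// Sketch: confine user-supplied paths (--input, --out-dir) to a base
// directory. resolveWithinBase is a hypothetical helper, not part of
// scripts/gen.mjs.
import { resolve, sep } from "node:path";

function resolveWithinBase(baseDir, userPath) {
  const base = resolve(baseDir);
  const target = resolve(base, userPath);
  // resolve() collapses "../" segments, so a traversal attempt lands
  // outside base and fails the prefix check below.
  if (target !== base && !target.startsWith(base + sep)) {
    throw new Error(`Path escapes allowed directory: ${userPath}`);
  }
  return target;
}

// Usage (hypothetical argument names): confine both paths before any
// filesystem access.
// const inputPath = resolveWithinBase("./skill-inputs", args.input);
// const outDir    = resolveWithinBase("./skill-output", args.outDir);
```

Checking the fully resolved path, rather than scanning the raw string for `../`, also catches absolute paths and mixed traversal forms.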
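The XSS remediation for the third finding amounts to entity-encoding the prompt before it is interpolated into the generated HTML. A minimal sketch, assuming a hypothetical `escapeHtml` helper (not present in the skill's code):

```javascript
// Sketch: escape HTML-significant characters before embedding a user
// prompt in index.html. escapeHtml is a hypothetical helper name.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, "&amp;") // must run first, or entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Instead of interpolating the raw prompt:
//   `<figcaption>${it.prompt}</figcaption>`
// escape it first:
//   `<figcaption>${escapeHtml(it.prompt)}</figcaption>`
```

Note the ordering: `&` is replaced before the other characters so that already-produced entities are not escaped a second time.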