Trust Assessment
openai-image-cli received a trust score of 65/100, placing it in the Caution category. The skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include OpenAI API key exposure via the `config get` command, an unpinned `npm` dependency, and broad file system read/write access.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 56/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **OpenAI API key exposure via `config get` command.** The skill's documentation indicates that `openai-image config get api-key` retrieves and displays the `OPENAI_API_KEY`. An attacker could craft a prompt that tricks the LLM into executing this command and then exfiltrating the key. *Mitigation:* restrict access to the `config get api-key` subcommand, or ensure the LLM's execution environment cannot output sensitive configuration values. Ideally, the tool should not allow programmatic retrieval of the API key once set, or should require additional authentication. | LLM | SKILL.md:100 |
| HIGH | **Unpinned `npm` dependency for `openai-image-cli`.** The installation instructions recommend `npm install -g @versatly/openai-image-cli` without specifying a version, so future installs could pull any version of the package, including a compromised or malicious one. This is a significant supply chain risk: a malicious update to the `npm` package would directly affect the security of the skill. *Mitigation:* pin the dependency to a specific, known-good version (e.g., `npm install -g @versatly/openai-image-cli@1.0.0`); consider vendoring critical dependencies or using a lock file to ensure deterministic installations. | LLM | SKILL.md:12 |
| MEDIUM | **Broad file system read/write access.** The `openai-image` CLI reads and writes local files extensively (the `edit`, `vary`, and `batch` commands for input; `-o`/`--output` for output). While necessary for its core functionality, this broad access could be abused by a malicious prompt to read sensitive files or write to critical system locations, potentially leading to data exfiltration or system compromise. *Mitigation:* sandbox or restrict the filesystem scope available to the `openai-image` binary when executed by the LLM, and validate all LLM-supplied file paths to ensure they stay within an allowed directory. | LLM | SKILL.md:57 |
| MEDIUM | **Potential command injection via user-controlled arguments.** The CLI accepts user-controlled strings such as prompts (`openai-image generate "prompt"`), instructions (`openai-image edit photo.png "instructions"`), and file paths. If the underlying binary does not sanitize these inputs before passing them to internal shell commands or system calls, an attacker could craft a prompt that executes arbitrary commands on the host. *Mitigation:* the binary should rigorously sanitize all user-provided input before any shell or system call; avoid `shell=True` in `subprocess` calls and pass arguments as a list; instruct the LLM to validate and sanitize user input before constructing CLI commands. | LLM | SKILL.md:25 |
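The path-validation mitigation for the filesystem finding can be sketched as below. This is a minimal illustration, not part of the skill: the function name `is_within_allowed_dir` and the `workspace` directory are hypothetical, and the check assumes the host wrapper resolves paths before handing them to the binary.

```python
import os

def is_within_allowed_dir(path: str, allowed_dir: str) -> bool:
    """Return True only if `path` resolves to a location inside `allowed_dir`.

    realpath() resolves symlinks and `..` segments, so traversal
    attempts like `workspace/../etc/passwd` are caught after
    resolution rather than by string matching.
    """
    resolved = os.path.realpath(path)
    allowed = os.path.realpath(allowed_dir)
    return os.path.commonpath([resolved, allowed]) == allowed

# Accept only output paths under ./workspace:
# is_within_allowed_dir("workspace/cat.png", "workspace")      -> True
# is_within_allowed_dir("workspace/../secret.txt", "workspace") -> False
```

Comparing against `os.path.commonpath` (rather than a string prefix) also rejects sibling directories such as `workspace-evil/` that a naive `startswith` check would accept.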
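The command-injection mitigation, passing arguments as a list with no shell involved, might look like the following sketch. The `generate` subcommand and `-o` flag come from the report; the wrapper functions themselves are hypothetical and assume the caller invokes the CLI from Python.

```python
import subprocess

def build_generate_argv(prompt: str, output_path: str) -> list:
    """Build the argv for `openai-image generate` as a list.

    Each argument is its own element, so shell metacharacters in the
    prompt (`;`, `$(...)`, backticks) reach the binary as literal text
    instead of being interpreted by a shell.
    """
    return ["openai-image", "generate", prompt, "-o", output_path]

def run_generate(prompt: str, output_path: str):
    # No shell=True: subprocess executes the binary directly with argv,
    # so there is no shell to expand or split the prompt.
    return subprocess.run(
        build_generate_argv(prompt, output_path),
        check=True, capture_output=True, text=True,
    )
```

Even with list-form argv, the prompt and output path should still be validated (length limits, allowed output directory) before the call, since the binary itself may mishandle hostile input.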
[Full report](https://skillshield.io/report/d8fbe2b4b7355d6f)