Trust Assessment
openai-image-gen received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include unsafe environment variable passthrough, credential harvesting, and a suspicious import of `urllib.request`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 55/100, indicating the most room for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Credential harvesting**: reading well-known credential environment variables. Skills should only access environment variables they explicitly need; bulk environment dumps (`os.environ.copy`, `JSON.stringify(process.env)`) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores (see the environment-allowlist sketch after the table). | Manifest | skills/steipete/openai-image-gen/scripts/gen.py:167 |
| HIGH | **Unsafe environment variable passthrough**: access to well-known credential environment variables. Minimize environment variable exposure: only pass required, non-sensitive variables to MCP servers, and use dedicated secret management instead of environment passthrough. | Manifest | skills/steipete/openai-image-gen/scripts/gen.py:167 |
| MEDIUM | **Suspicious import: `urllib.request`**: import of `urllib.request` detected; this module provides network or low-level system access. Verify this import is necessary, since network and system modules in skill code may indicate data exfiltration. | Static | skills/steipete/openai-image-gen/scripts/gen.py:13 |
| MEDIUM | **Potential Cross-Site Scripting (XSS) in generated HTML**: the `_write_index` function writes user-controlled prompt strings directly into an HTML `<pre>` tag in the generated `index.html` without HTML-escaping. If the AI agent is prompted to produce a malicious string (e.g., one containing `<script>alert('XSS')</script>`), that content is rendered in the local `index.html`. `<pre>` tags display content literally, but they are not a reliable defense against HTML injection, and a crafted payload could still lead to XSS when the user opens the local file. HTML-escape `it['prompt']` before inserting it, e.g. with `html.escape(it['prompt'])` from Python's `html` module, so special characters are converted to entities rather than interpreted as active content (see the escaping sketch after the table). | LLM | scripts/gen.py:168 |
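The two credential findings both point at `gen.py:167` touching well-known credential environment variables. Below is a minimal sketch of the allowlist pattern the remediation describes, assuming the skill only needs `OPENAI_API_KEY` and launches a helper process; the names `ALLOWED_VARS`, `scoped_env`, and `run_tool` are illustrative and not taken from the skill's actual code.

```python
import os
import subprocess

# Hypothetical allowlist: the only variable this skill plausibly needs.
ALLOWED_VARS = {"OPENAI_API_KEY"}

def scoped_env() -> dict:
    """Build a minimal environment instead of dumping os.environ wholesale
    (the anti-pattern flagged above, e.g. env = os.environ.copy())."""
    return {name: value for name, value in os.environ.items() if name in ALLOWED_VARS}

def run_tool(cmd: list[str]) -> subprocess.CompletedProcess:
    # Pass only the allowlisted variables through to the child process.
    return subprocess.run(cmd, env=scoped_env(), check=True, capture_output=True, text=True)
```

The same idea applies when the script reads credentials for itself: fetch the single variable it needs with `os.environ.get("OPENAI_API_KEY")` rather than iterating over or copying the whole environment.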
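For the XSS finding, the remediation amounts to escaping the prompt before interpolating it into the page. The report does not show the surrounding function, so the signature and HTML layout below are assumed for illustration; only the `html.escape(it['prompt'])` call comes from the finding itself.

```python
import html
from pathlib import Path

def _write_index(items: list[dict], out_dir: Path) -> None:
    """Illustrative version: escape user-controlled prompts before embedding them."""
    blocks = []
    for it in items:
        # html.escape turns <, >, &, and quotes into entities, so a prompt such as
        # "<script>alert('XSS')</script>" is displayed literally instead of executed.
        safe_prompt = html.escape(it["prompt"])
        blocks.append(f"<pre>{safe_prompt}</pre>")
    (out_dir / "index.html").write_text("\n".join(blocks), encoding="utf-8")
```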
Embed Code
[](https://skillshield.io/report/f68926fc4f93175d)
Powered by SkillShield