# Trust Assessment
`creative-illustration` received a trust score of 12/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.

SkillShield's automated analysis identified 7 findings: 1 critical, 3 high, 2 medium, and 1 low severity. Key findings include unsafe environment variable passthrough, credential harvesting, and a suspicious `urllib.request` import.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
## Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Credential harvesting.** Reads well-known credential environment variables. Skills should only access the environment variables they explicitly need; bulk environment dumps (`os.environ.copy()`, `JSON.stringify(process.env)`) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | `skills/hhhh124hhhh/creative-illustration/scripts/illustrate.py:272` |
| HIGH | **Unsafe environment variable passthrough.** Accesses well-known credential environment variables. Minimize environment variable exposure: pass only required, non-sensitive variables to MCP servers, and use dedicated secret management instead of environment passthrough. | Manifest | `skills/hhhh124hhhh/creative-illustration/scripts/illustrate.py:272` |
| HIGH | **Cross-site scripting (XSS) in the generated HTML gallery.** `scripts/illustrate.py` inserts user-provided prompt text (from the `--subject` or `--prompt` arguments) into the generated `index.html`, both in HTML attributes (`alt`) and element content (`<p>`), without escaping. A malicious prompt containing HTML tags or attribute-breaking characters (such as double quotes) can execute arbitrary JavaScript when the file is viewed in a browser, potentially enabling data exfiltration (e.g. reading local files via the `file://` protocol, if browser security allows) or other client-side attacks. Escape every user-controlled string with `html.escape()` before embedding it, e.g. `alt="{html.escape(entry['prompt'])}"` and `<p><strong>Prompt:</strong> {html.escape(entry['prompt'])}</p>`. | LLM | `scripts/illustrate.py:306` |
| HIGH | **OpenAI API key can be exfiltrated via a compromised `OPENAI_BASE_URL`.** The key is read from the `OPENAI_API_KEY` environment variable or the `--api-key` argument and is sent to the endpoint named by `OPENAI_BASE_URL` or `OPENAI_API_BASE`, falling back to `https://api.openai.com`. An attacker who can set these variables in the agent's execution environment can redirect authenticated requests, and the key with them, to a server they control. Manage the agent's environment variables securely; where `OPENAI_BASE_URL` could be user-controlled, validate or whitelist it so that only trusted OpenAI endpoints are allowed. | LLM | `scripts/illustrate.py:100` |
| MEDIUM | **Suspicious import: `urllib.request`.** This module provides network access. Verify the import is necessary; network and low-level system modules in skill code may indicate data exfiltration. | Static | `skills/hhhh124hhhh/creative-illustration/scripts/illustrate.py:13` |
| MEDIUM | **Arbitrary output directory allows writing to sensitive filesystem locations.** The `--out-dir` argument accepts any path; filenames within it are sanitized, but the base directory itself is not restricted. If the agent can write to sensitive paths (e.g. `/etc`, `/usr/local/bin`, or other user directories), an attacker could exhaust disk space, overwrite critical files, or place malicious files in unexpected locations, potentially aiding privilege escalation. Restrict `--out-dir` to a designated safe directory (e.g. a subdirectory of the skill's own temporary space or a user-specific sandbox), or run the agent with minimal filesystem permissions. | LLM | `scripts/illustrate.py:260` |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`) was found. Commit a lockfile for deterministic dependency resolution. | Dependencies | `skills/hhhh124hhhh/creative-illustration/package.json` |
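For the credential-harvesting and environment-passthrough findings, the remediation amounts to least-privilege environment access: read the one variable you need, never the whole environment. A minimal sketch (the helper name `get_api_key` is illustrative, not the skill's actual code):

```python
import os

def get_api_key() -> str:
    # Read only the single variable the skill needs. Never use bulk
    # dumps such as os.environ.copy(), which sweep up unrelated secrets.
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key
```

The point is scope: a reviewer can verify at a glance exactly which secret the skill touches.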
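The XSS finding's fix can be sketched with the standard-library `html` module, as the report recommends (`render_entry` is a hypothetical helper, not the skill's actual function):

```python
import html

def render_entry(prompt: str, filename: str) -> str:
    # html.escape with quote=True (the default) also escapes double
    # quotes, so user text cannot break out of an attribute value.
    safe_prompt = html.escape(prompt, quote=True)
    safe_file = html.escape(filename, quote=True)
    return (
        f'<img src="{safe_file}" alt="{safe_prompt}">\n'
        f"<p><strong>Prompt:</strong> {safe_prompt}</p>"
    )
```

Escaping must happen at the point of insertion into HTML, once per context, not earlier in the pipeline.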
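The base-URL exfiltration finding suggests whitelisting the API endpoint before any credential is attached to a request. A sketch under the assumption that only the official endpoint is trusted (`validate_base_url` and `ALLOWED_API_HOSTS` are illustrative names):

```python
from urllib.parse import urlparse

# Assumption: only the official OpenAI endpoint is trusted.
ALLOWED_API_HOSTS = {"api.openai.com"}

def validate_base_url(base_url: str) -> str:
    # Require HTTPS and an allowlisted host before the API key is
    # ever sent to this endpoint.
    parsed = urlparse(base_url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_API_HOSTS:
        raise ValueError(f"refusing to send credentials to {base_url!r}")
    return base_url
```

A deployment that legitimately uses a proxy would extend the allowlist rather than disable the check.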
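For the output-directory finding, the recommended sandboxing can be sketched with `pathlib`: resolve the requested path and refuse anything that escapes a designated base directory (`resolve_out_dir` is a hypothetical helper, not the skill's code):

```python
from pathlib import Path

def resolve_out_dir(requested: str, sandbox: Path) -> Path:
    # Resolve symlinks and ".." components first, then require the
    # result to remain inside the sandbox before creating it.
    target = (sandbox / requested).resolve()
    sandbox = sandbox.resolve()
    if target != sandbox and sandbox not in target.parents:
        raise ValueError(f"output directory escapes sandbox: {target}")
    target.mkdir(parents=True, exist_ok=True)
    return target
```

Checking containment after `resolve()` is what defeats `../` traversal and symlink tricks; a plain string-prefix comparison on the raw argument would not.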
## Embed Code
[](https://skillshield.io/report/fb487195540c5699)
Powered by SkillShield