Trust Assessment
ui-ux-pro-max received a trust score of 30/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 9 findings: 1 critical, 1 high, 7 medium, and 0 low severity. Key findings include unsafe deserialization / dynamic eval, potential shell command injection via user input in Python script calls, and arbitrary file write via the `--output-dir` argument.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 41/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (9)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Shell Command Injection via User Input in Python script calls.** The skill instructs the host LLM to execute `python3 scripts/search.py` commands, where arguments like `query`, `project-name`, `page`, and `output-dir` are derived from user input. If the LLM does not properly quote or escape these user-controlled arguments when constructing the shell command, a malicious user could inject arbitrary shell commands. For example, a crafted `query` like `"my product; rm -rf /"` could lead to arbitrary command execution. Even if the LLM quotes the input, a user could craft input containing quotes to break out of the intended argument and inject new arguments or commands. The host LLM must ensure all user-provided arguments are strictly quoted and escaped for shell execution. Additionally, consider implementing a more robust input validation and sanitization mechanism within the Python script itself for arguments that could be used in file paths or command execution. | LLM | SKILL.md:108 |
| HIGH | **Arbitrary File Write via `--output-dir` argument.** The `scripts/search.py` script accepts a `--output-dir` argument which allows specifying an arbitrary directory for saving generated design system files. If a malicious user can control this argument, they could write files to sensitive locations on the filesystem, potentially overwriting critical system files or exfiltrating data by writing it to an accessible location. This constitutes an arbitrary file write vulnerability. Restrict the `--output-dir` argument to a predefined, safe directory or a subdirectory within the skill's own data directory. Implement strict validation to prevent path traversal sequences (e.g., `../`) and ensure the target directory is within an allowed sandbox. | LLM | scripts/search.py:40 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/wpank/ui-ux/scripts/core.py:4 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/wpank/ui-ux/scripts/design_system.py:11 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/wpank/ui-ux/scripts/design_system.py:820 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/wpank/ui-ux/scripts/design_system.py:916 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/wpank/ui-ux/scripts/search.py:12 |
| MEDIUM | **Path Traversal Vulnerability in `project-name` and `page` arguments.** The `project-name` and `page` arguments are used to construct file paths (e.g., `design-system/{project_slug}/MASTER.md`, `design-system/{project_slug}/pages/{page_filename}.md`) within the `persist_design_system` function. If these arguments contain path traversal sequences (e.g., `../`, `../../`), a malicious user could potentially write files outside the intended `design-system` directory, leading to arbitrary file creation or modification in other parts of the filesystem. Sanitize `project-name` and `page` inputs to remove or disallow path traversal sequences. Ensure that `project_slug` and `page_filename` are strictly alphanumeric or conform to safe filename patterns before being used in file path construction. | LLM | scripts/search.py:36 |
| MEDIUM | **Unpinned Dependency in `npx` installation command.** The installation instruction `npx clawhub@latest install ui-ux-pro-max` uses `@latest` for the `clawhub` package. This means the skill relies on the most recent version of `clawhub`, which introduces a supply chain risk. A malicious update to `clawhub` could automatically be pulled and executed, compromising the system without explicit user or LLM approval. Pin the `clawhub` dependency to a specific, known-good version (e.g., `npx clawhub@1.2.3 install ui-ux-pro-max`). Regularly review and update pinned versions after verifying their integrity. | LLM | SKILL.md:99 |
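The critical shell-injection finding can be mitigated at the call site by never letting a shell parse user values. A minimal sketch, assuming the host constructs the command itself (the flag names below are illustrative, not taken from the skill):

```python
import shlex


def build_search_cmd(query: str, output_dir: str) -> list[str]:
    # Passing an argv list (with shell=False, subprocess's default) means
    # each value reaches the script as one argument; metacharacters like
    # `;` or quotes are never interpreted by a shell.
    return ["python3", "scripts/search.py",
            "--query", query, "--output-dir", output_dir]


def build_shell_string(query: str) -> str:
    # If a single shell string is unavoidable, quote every user value.
    return f"python3 scripts/search.py --query {shlex.quote(query)}"
```

With this approach the example payload from the finding stays an inert literal: `build_search_cmd("my product; rm -rf /", "out")` carries the whole string as one argument.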
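For the `--output-dir` finding, a common confinement pattern is to resolve the user-supplied value against a fixed sandbox root and reject anything that escapes it. A sketch, assuming a hypothetical `design-system` root (requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import Path

ALLOWED_ROOT = Path("design-system").resolve()  # hypothetical sandbox root


def resolve_output_dir(user_value: str) -> Path:
    # Resolve symlinks and `..` segments first, then verify the result
    # still lives under the sandbox root. An absolute user path also
    # fails this check, because `/` joined onto the root replaces it.
    candidate = (ALLOWED_ROOT / user_value).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"output dir escapes sandbox: {user_value!r}")
    return candidate
```

Resolving before checking matters: a naive string-prefix test on the raw input would miss `subdir/../../..`-style sequences.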
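The path-traversal finding on `project-name` and `page` calls for strict filename validation before the values reach `design-system/{project_slug}/...` paths. One conservative sketch (the allowed alphabet and length cap here are assumptions, not taken from the skill):

```python
import re

# Lowercase alphanumerics, hyphen, underscore; must start alphanumeric.
SLUG_RE = re.compile(r"[a-z0-9][a-z0-9_-]{0,63}")


def safe_slug(value: str) -> str:
    # Normalize, then allow-list: anything containing `/`, `.`, or other
    # path metacharacters fails the match and is rejected outright.
    slug = value.strip().lower().replace(" ", "-")
    if not SLUG_RE.fullmatch(slug):
        raise ValueError(f"unsafe name: {value!r}")
    return slug
```

An allow-list like this is safer than stripping `../` substrings, which can be bypassed by nested sequences such as `....//`.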
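For the five decode-then-eval findings, the remediation is to treat untrusted payloads as data to parse, never as code to execute. A sketch contrasting the flagged pattern with safe standard-library parsers:

```python
import ast
import json

# Pattern the scanner flags (do NOT do this):
#   exec(base64.b64decode(blob))   # executes arbitrary attacker code
#   eval(decrypted_payload)        # same problem

# Safe alternatives: both parse values without executing anything.
config = json.loads('{"theme": "dark", "columns": 12}')
literal = ast.literal_eval("{'theme': 'dark', 'columns': 12}")
```

`ast.literal_eval` accepts only Python literals (strings, numbers, tuples, lists, dicts, sets, booleans, `None`), so a payload containing a function call raises `ValueError` instead of running.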