Security Audit
ui-ux-pro-max
github.com/nextlevelbuilder/ui-ux-pro-max-skill

Trust Assessment
ui-ux-pro-max received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include: Potential Command Injection via Python Script Execution, Potential Data Exfiltration via Path Traversal in File Operations, and Excessive Permissions Granted for File System Interaction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 6623f12b). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Python Script Execution.** The skill explicitly instructs the LLM to execute a Python script (`python3 skills/ui-ux-pro-max/scripts/search.py`) with arguments derived directly from user input. If that input is not sanitized or escaped before being passed to the shell, a malicious user could inject arbitrary shell commands: input such as `'; rm -rf /'` would, if unhandled, execute `rm -rf /` on the host system. Remediation: implement robust validation and sanitization for all user-provided arguments passed to shell commands; prefer a safer execution method such as `subprocess.run` with `shell=False`, passing arguments as a list (or a dedicated API if available) to prevent shell injection; and ensure the `search.py` script itself also validates and sanitizes its arguments. | LLM | SKILL.md:108 |
| HIGH | **Potential Data Exfiltration via Path Traversal in File Operations.** The skill instructs the LLM to create and read files based on user-controlled input: the `--page` argument determines the filename for page-specific design systems (e.g., `design-system/pages/dashboard.md`). A malicious user who supplies a path traversal sequence (e.g., `../../../../etc/passwd`) as the page name could direct the LLM to create or read arbitrary files outside the intended `design-system/pages/` directory, leading to data exfiltration or unauthorized file modification. Remediation: strictly validate and sanitize the `--page` argument; reject filenames containing directory separators (`/`, `\`) or special directory components (`..`); and enforce the same validation in the underlying `search.py` script. | LLM | SKILL.md:128 |
| MEDIUM | **Excessive Permissions Granted for File System Interaction.** The skill grants the LLM the ability to execute arbitrary Python scripts and to create, write, and read files based on user input. Although intended for specific design-system files, this broad capability, especially combined with the command-injection and path-traversal findings above, represents excessive permissions: an attacker exploiting those vulnerabilities could execute malicious code, exfiltrate sensitive data, or disrupt the system. Remediation: minimize the permissions granted to the LLM; if file-system interaction is necessary, restrict it to specific, isolated directories and enforce strict validation on all file paths and content; and consider sandboxing the execution environment for external commands to limit the damage from a successful injection. | LLM | SKILL.md:108 |
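The `subprocess.run` remediation for the critical finding can be sketched as follows. This is a minimal illustration, not code from the skill: `run_search` is a hypothetical wrapper around the script path named in the finding, and the `echo` demonstration simply shows that the example payload from the report arrives as inert data when no shell parses it.

```python
import subprocess

# Script path as given in the finding (assumed relative to the skill root).
SEARCH_SCRIPT = "skills/ui-ux-pro-max/scripts/search.py"

def run_search(query: str) -> subprocess.CompletedProcess:
    """Hypothetical safe invocation: argv is a list and shell=False (the
    default), so no shell ever interprets metacharacters in `query`."""
    return subprocess.run(
        ["python3", SEARCH_SCRIPT, query],
        capture_output=True, text=True, check=True, timeout=30,
    )

# Demonstration with /bin/echo: the report's injection payload is passed
# through as a single literal argument instead of being executed.
payload = "'; rm -rf /'"
out = subprocess.run(["echo", payload], capture_output=True, text=True).stdout
```

Because the payload reaches `echo` as one argument, it is printed verbatim; nothing after the `;` is executed.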
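The path-traversal remediation for the high finding could look like the sketch below, assuming page files live under `design-system/pages/` as the finding states. The allow-list pattern and the `page_path` helper are our illustrative assumptions, not the skill's actual code.

```python
import re
from pathlib import Path

PAGES_DIR = Path("design-system/pages").resolve()

# Allow-list of safe page names: no separators, no dots, bounded length.
PAGE_NAME = re.compile(r"[a-z0-9][a-z0-9_-]{0,63}")

def page_path(page: str) -> Path:
    """Map a user-supplied --page value to a file inside PAGES_DIR,
    rejecting traversal sequences like '../../../../etc/passwd'."""
    if not PAGE_NAME.fullmatch(page):
        raise ValueError(f"invalid page name: {page!r}")
    target = (PAGES_DIR / f"{page}.md").resolve()
    # Defense in depth: confirm the resolved path stays inside PAGES_DIR
    # even if the allow-list check is ever loosened.
    if PAGES_DIR not in target.parents:
        raise ValueError("path escapes design-system/pages/")
    return target
```

The allow-list check rejects traversal outright; the containment check on the resolved path guards against future regressions in the pattern.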
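The sandboxing suggestion in the medium finding can be approximated on POSIX systems with resource limits and a stripped environment; a full sandbox (containers, seccomp) is out of scope here. Everything below is a sketch under those assumptions; the limits and environment are illustrative choices, not values from the report.

```python
import resource
import subprocess

def run_restricted(argv: list[str], workdir: str) -> subprocess.CompletedProcess:
    """Run an external command with a CPU cap, a file-size cap, a minimal
    environment, and a fixed working directory (POSIX only)."""
    def apply_limits() -> None:
        # Runs in the child just before exec: cap CPU at 5 s and any
        # file the child writes at 1 MiB.
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
        resource.setrlimit(resource.RLIMIT_FSIZE, (1 << 20, 1 << 20))

    return subprocess.run(
        argv,
        cwd=workdir,                      # confine relative paths
        env={"PATH": "/usr/bin:/bin"},    # drop inherited secrets
        preexec_fn=apply_limits,
        capture_output=True, text=True, timeout=10,
    )

result = run_restricted(["pwd"], "/")
```

This limits blast radius rather than eliminating it; pairing it with the argument validation above is what actually closes the injection and traversal findings.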
Full report: [skillshield.io/report/cda31e1456f0ed16](https://skillshield.io/report/cda31e1456f0ed16)
Powered by SkillShield