Trust Assessment
linux-gui-control received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 3 high, 0 medium, and 0 low severity. Key findings include unquoted user input in shell command examples, broad GUI and system control capabilities, and potential for sensitive data exposure via UI inspection and screenshots.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unquoted user input in shell command examples.** The `SKILL.md` documentation provides examples of shell commands (`pkill <app>`, `nohup <app> ...`) in which a placeholder for user-controlled input (`<app>`) is interpolated directly into a shell command without quoting. If an LLM generates commands following these patterns with untrusted user input, it creates a direct command-injection vulnerability: input such as `my_app; rm -rf /` would execute arbitrary commands on the host. Always quote user-controlled variables in shell commands, e.g. `pkill "$app_name"` and `nohup "$app_name" --force-renderer-accessibility > /dev/null 2>&1 &`. | LLM | SKILL.md:40 |
| HIGH | **Broad GUI and system control capabilities.** The skill uses powerful tools (`xdotool`, `wmctrl`, `dogtail`, `scrot`, `pkill`) that grant extensive control over the user's desktop: simulating keyboard/mouse input, managing windows (activate, move, resize, close), terminating processes, and taking screenshots. While these capabilities are central to the skill's purpose, they pose a significant security risk: if the AI agent is compromised or misused, they could enable unauthorized actions, data loss, privacy breaches (e.g., typing sensitive information, closing critical applications, capturing screenshots of confidential data), or system instability. Implement strict access controls and user-consent mechanisms for sensitive GUI interactions, and ensure the LLM's reasoning and actions are auditable. Consider splitting the skill into more granular, permission-scoped sub-skills, or requiring explicit user confirmation for high-impact actions. | LLM | SKILL.md |
| HIGH | **Potential for sensitive data exposure via UI inspection and screenshots.** (1) *UI tree inspection:* the `scripts/inspect_ui.py` tool, via `dogtail`, can extract the entire UI hierarchy of any running application, including text displayed in application windows, which could inadvertently expose sensitive information (personal data, credentials, confidential documents) if the agent is instructed to inspect an application containing such data. (2) *Screenshots:* the `scripts/gui_action.sh` tool, via `scrot`, can capture full-desktop screenshots containing highly sensitive visual information from any open application. The skill itself does not exfiltrate these files, but it creates data that could then be accessed and exfiltrated by other means. Mitigations: filter or redact potentially sensitive UI text; require explicit user consent before inspecting applications known to handle sensitive data; minimize screenshot scope (specific windows rather than the full desktop); enforce strict policies for handling and storing captured images; and require explicit consent for screenshots, especially full-desktop captures. | LLM | scripts/inspect_ui.py:19 |
| HIGH | **Potential for sensitive data exposure via UI inspection and screenshots.** Same finding as above, flagged at the screenshot-capturing entry point: `scripts/gui_action.sh` invokes `scrot` for full-desktop captures. The same mitigations apply: minimize screenshot scope, enforce handling and storage policies for captured images, and require explicit user consent, especially for full-desktop captures. | LLM | scripts/gui_action.sh:28 |
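Beyond quoting, the command-injection risk in the critical finding can be avoided entirely by never routing user input through a shell. A minimal Python sketch (the function names are illustrative, not part of the skill):

```python
import shlex
import subprocess


def kill_app(app_name: str) -> None:
    """Terminate an application by name without invoking a shell.

    Passing arguments as a list means shell metacharacters in
    app_name are never interpreted, so input like
    'my_app; rm -rf /' is treated as a literal (and harmless)
    process name rather than two commands.
    """
    subprocess.run(["pkill", "--", app_name], check=False)


def shell_safe_kill_command(app_name: str) -> str:
    """Build a safely quoted command string when a shell is unavoidable."""
    return f"pkill -- {shlex.quote(app_name)}"
```

`shlex.quote` wraps any string containing metacharacters in single quotes, which is the same defense the finding's `pkill "$app_name"` example applies by hand.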
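The explicit-confirmation mitigation suggested for the broad-control finding can be sketched as a consent gate placed in front of high-impact actions. This is a hedged illustration, not part of the skill; the injectable `ask` callback exists only so the gate can be exercised without a terminal:

```python
def confirmed(action: str, ask=input) -> bool:
    """Return True only if the user explicitly approves the action.

    Anything other than an explicit "y"/"yes" — including an empty
    reply — is treated as denial, so the safe path is the default.
    """
    reply = ask(f"Allow the agent to {action}? [y/N] ")
    return reply.strip().lower() in ("y", "yes")
```

A wrapper like this would sit between the agent's decision and calls such as `pkill` or window-close operations, making every destructive step auditable.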
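The redaction mitigation for UI-tree output can be implemented as a pattern filter applied before inspected text reaches the agent. The patterns below are illustrative placeholders, not a complete sensitive-data detector; a real deployment would tune them to its own threat model:

```python
import re

# Example patterns only: key=value credentials and long digit runs
# (e.g. card numbers). Real deployments need broader coverage.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\b(?:password|token|secret)\s*[:=]\s*\S+"),
    re.compile(r"\b\d{13,16}\b"),
]


def redact_ui_text(text: str) -> str:
    """Replace matches of known sensitive patterns with a marker."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Such a filter would wrap the output of a tool like `inspect_ui.py`, so the agent sees `[REDACTED]` instead of credential-like strings extracted from application windows.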
[View the full report](https://skillshield.io/report/f55163f3dc2b14e0)