Trust Assessment
windows-control received a trust score of 62/100, placing it in the Caution category: the skill carries security considerations that users should review before deployment.
SkillShield's automated analysis identified 12 findings: 7 critical, 2 high, 2 medium, and 1 low severity. Key findings include Full Desktop Control Grants Excessive Permissions, Arbitrary Text Input Leads to Command Injection, and a missing Node lockfile.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100 and accounts for 11 of the 12 findings below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (12)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Full Desktop Control Grants Excessive Permissions.** The skill is designed to provide full control over the Windows desktop, including mouse, keyboard, and screen interaction. This grants the AI agent highly privileged access to the host system, allowing it to perform any action a human user can, bypassing typical sandboxing and security boundaries. This level of access inherently poses a critical security risk. Mitigation: Implement strict human-in-the-loop approval for all actions, especially those involving typing, clicking, or reading sensitive information. Restrict the skill's execution context to a sandboxed environment if possible. Clearly communicate the high-risk nature of this skill to users and ensure robust guardrails are in place. | LLM | SKILL.md:1 |
| CRITICAL | **Arbitrary Text Input Leads to Command Injection.** The `type_text.py` script directly takes user-provided text from `sys.argv` and simulates typing it on the system using `pyautogui.write`. If the AI agent is instructed to type malicious commands into an active terminal, a 'Run' dialog (e.g., opened by `key_press.py`), or any application that processes text as commands, it can lead to arbitrary command execution on the host system. Mitigation: Implement strict sanitization and validation of input text before it is typed. Require explicit user confirmation for typing into sensitive applications (e.g., terminals, system dialogs). Consider limiting the characters that can be typed or disallowing typing into specific application contexts. A guarded sketch follows the table. | LLM | scripts/type_text.py:15 |
| CRITICAL | **Arbitrary Key Press Combinations Lead to Command Injection.** The `key_press.py` script allows the AI agent to simulate any key press or key combination (e.g., 'win+r', 'ctrl+s') using `pyautogui.hotkey` or `pyautogui.press`. This can be exploited to open system utilities (like the 'Run' dialog), execute system commands, or perform destructive actions (e.g., closing applications, saving files without confirmation). When combined with text input, it forms a direct path to arbitrary command injection. Mitigation: Implement strict validation for allowed key combinations. Require explicit user confirmation for sensitive key presses. Consider disallowing system-level key combinations that can launch arbitrary programs or modify system settings. An allowlist sketch follows the table. | LLM | scripts/key_press.py:17 |
| CRITICAL | **Typing into Dialog Fields Can Lead to Command Injection.** The `handle_dialog.py` script's `type_in_field` function allows the AI agent to input arbitrary text into any editable field within a system dialog. This poses a critical command injection risk if the dialog is a 'Run' prompt, a file save/open dialog where malicious paths or executable filenames can be entered, or any other dialog that interprets user input as commands. Mitigation: Implement strict sanitization and validation of text input for dialogs. Require explicit user confirmation before typing into sensitive dialog fields, especially those related to file system operations or command execution. | LLM | scripts/handle_dialog.py:208 |
| CRITICAL | **Full Screen Capture Leads to Data Exfiltration.** The `screenshot.py` script captures the entire screen using `pyautogui.screenshot()` and encodes it as a base64 PNG. This allows the AI agent to exfiltrate any visual information currently displayed on the user's monitor, including sensitive documents, personal data, or credentials. Mitigation: Implement strict human-in-the-loop approval for all screenshot requests. Consider redacting sensitive areas of the screen or limiting screenshots to specific, non-sensitive application windows. An approval-gated sketch follows the table. | LLM | scripts/screenshot.py:9 |
| CRITICAL | **Reading Window Content Leads to Data Exfiltration.** The `read_window.py` script extracts all accessible text content from a specified application window using `pywinauto`. This poses a critical data exfiltration risk as the AI agent can read and potentially transmit sensitive information from documents, emails, chat applications, or any other open window, including credentials or confidential data. Mitigation: Implement strict policies on which windows can be read. Require explicit user confirmation before reading content from sensitive applications. Redact or filter out potentially sensitive information from the extracted text. A redaction sketch follows the table. | LLM | scripts/read_window.py:40 |
| CRITICAL | **Reading Webpage Content Leads to Data Exfiltration.** The `read_webpage.py` script extracts comprehensive content from browser windows, including text, headings, buttons, links, and input field values, using `pywinauto`. This presents a critical data exfiltration risk, enabling the AI agent to harvest sensitive information from web applications, such as credentials, financial data, or confidential documents displayed in a browser. Mitigation: Implement strict policies on which browser content can be read. Require explicit user confirmation before reading content from sensitive websites or web applications. Redact or filter out potentially sensitive information from the extracted content. | LLM | scripts/read_webpage.py:66 |
| HIGH | **OCR of Screen Region Can Exfiltrate Sensitive Data.** If Tesseract OCR is installed, the `read_region.py` script can extract text from any defined rectangular area of the screen. This allows the AI agent to exfiltrate sensitive information that is visually present but not directly accessible via UI automation, such as text within images, PDFs, or custom UI elements. Mitigation: Implement strict human-in-the-loop approval for all screen region OCR requests. Consider redacting sensitive areas or limiting OCR to specific, non-sensitive application regions. An approval-gated OCR sketch follows the table. | LLM | scripts/read_region.py:40 |
| HIGH | **Reading Dialog Content Can Exfiltrate Sensitive Data.** The `handle_dialog.py` script's `read` functionality extracts all visible text and element information from system dialogs. These dialogs can often display sensitive information such as file paths, error details, user data, or parts of credentials, which could then be exfiltrated by the AI agent. Mitigation: Implement strict policies on what types of dialog content can be read and processed. Redact or filter out potentially sensitive information before it is returned to the AI agent. Require explicit user confirmation before reading dialogs that might contain confidential data. | LLM | scripts/handle_dialog.py:108 |
| MEDIUM | **Finding Text Locations Can Aid Data Exfiltration.** The `find_text.py` script locates and reports the coordinates of specific text within a window. While it doesn't directly exfiltrate content, knowing the precise location of sensitive keywords (e.g., 'password', 'SSN') can be used by an attacker to guide subsequent actions with other tools (like `screenshot.py` or `read_region.py`) to target and exfiltrate nearby sensitive data. Mitigation: Implement strict policies on what text can be searched for. Require explicit user confirmation if the search involves potentially sensitive keywords or if the found text is in a sensitive application. | LLM | scripts/find_text.py:30 |
| MEDIUM | **Reading UI Element Names Can Exfiltrate Contextual Data.** The `read_ui_elements.py` script extracts names and types of interactive UI elements from a window. While not full content extraction, element names can sometimes contain sensitive contextual information (e.g., 'Delete User 'admin'', 'Confirm Payment of $1000') that could be exfiltrated. Mitigation: Review and filter the names of UI elements returned to the AI agent, especially for sensitive applications. Implement user confirmation for actions based on potentially sensitive element names. | LLM | scripts/read_ui_elements.py:25 |
| LOW | **Node lockfile missing.** package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Mitigation: Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/spliff7777/windows-control/package.json |
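Mitigation Sketches
The command-injection findings hinge on `pyautogui.write` replaying attacker-influenced text into whatever window currently has focus. Below is a minimal sketch of that pattern with one possible guard; the `BLOCKED_CHARS` policy and the confirmation prompt are illustrative assumptions, not part of the skill.

```python
# A minimal sketch, assuming the risky pattern described in the finding:
# text arrives via sys.argv and is replayed with pyautogui.write. The
# character filter and confirmation prompt below are hypothetical guards.
import sys

import pyautogui

BLOCKED_CHARS = {"\n", "\r", "\t"}  # a typed Enter can submit a command

def type_text_guarded(text: str) -> None:
    """Refuse control characters and require explicit user approval."""
    if BLOCKED_CHARS & set(text):
        raise ValueError("refusing to type control characters")
    if input(f"Type {text!r} into the focused window? [y/N] ").strip().lower() != "y":
        raise PermissionError("user declined")
    pyautogui.write(text, interval=0.02)

if __name__ == "__main__":
    # The original script reportedly takes the text straight from sys.argv.
    type_text_guarded(" ".join(sys.argv[1:]))
```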
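For `key_press.py`, the recommended validation of allowed key combinations could take the form of an allowlist plus a block on system-level keys. The `ALLOWED_HOTKEYS` and `BLOCKED_KEYS` policies below are hypothetical examples, not the skill's configuration.

```python
# A sketch of allowlist validation for simulated key presses, assuming the
# script forwards key names to pyautogui.press / pyautogui.hotkey.
import pyautogui

ALLOWED_HOTKEYS = {("ctrl", "c"), ("ctrl", "v"), ("alt", "tab")}
BLOCKED_KEYS = {"win", "winleft", "winright"}  # blocks win+r style launches

def press_keys_guarded(*keys: str) -> None:
    """Allow single keys, block system keys, allowlist combinations."""
    combo = tuple(k.lower() for k in keys)
    if BLOCKED_KEYS & set(combo):
        raise PermissionError(f"system-level key blocked: {combo}")
    if len(combo) > 1 and combo not in ALLOWED_HOTKEYS:
        raise PermissionError(f"hotkey not on allowlist: {combo}")
    if len(combo) == 1:
        pyautogui.press(combo[0])
    else:
        pyautogui.hotkey(*combo)
```

Under this policy, `press_keys_guarded("enter")` is permitted, while `press_keys_guarded("win", "r")` is refused before any key event is sent.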
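The `screenshot.py` finding describes a full-screen grab encoded as a base64 PNG. The sketch below reproduces that capture path behind an assumed human-in-the-loop prompt, which is not present in the original script.

```python
# A sketch of the capture path the finding describes (full-screen grab,
# base64 PNG), gated behind a hypothetical approval prompt.
import base64
import io

import pyautogui

def screenshot_with_approval() -> str:
    """Capture the full screen as a base64 PNG only after user consent."""
    if input("Capture the full screen now? [y/N] ").strip().lower() != "y":
        raise PermissionError("user declined screenshot")
    image = pyautogui.screenshot()      # returns a PIL.Image of the screen
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue()).decode("ascii")
```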
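For `read_window.py` (and, by the same logic, `read_webpage.py`), the recommended redaction could be a regex pass over the text that `pywinauto` extracts. The `SENSITIVE_PATTERNS` list is an illustrative starting point only; a production filter would need careful vetting.

```python
# A sketch of window-text extraction via pywinauto with a crude redaction
# pass, as the finding recommends. The patterns are illustrative.
import re

from pywinauto import Desktop

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-like numbers
    re.compile(r"(?i)password\s*[:=]\s*\S+"),   # inline credentials
]

def read_window_redacted(title_re: str) -> list[str]:
    """Collect visible text from a window, masking sensitive matches."""
    window = Desktop(backend="uia").window(title_re=title_re)
    texts = []
    for element in window.descendants():
        text = element.window_text()
        if not text:
            continue
        for pattern in SENSITIVE_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        texts.append(text)
    return texts
```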
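Finally, the `read_region.py` OCR path can carry the same approval gate. Binding to Tesseract through `pytesseract` is an assumption here; the report only states that Tesseract OCR must be installed.

```python
# A sketch of approval-gated region OCR in the style of read_region.py,
# assuming pytesseract as the Tesseract binding.
import pyautogui
import pytesseract

def ocr_region_with_approval(left: int, top: int, width: int, height: int) -> str:
    """OCR a screen region only after explicit user consent."""
    prompt = f"OCR screen region {left},{top} ({width}x{height})? [y/N] "
    if input(prompt).strip().lower() != "y":
        raise PermissionError("user declined OCR capture")
    image = pyautogui.screenshot(region=(left, top, width, height))
    return pytesseract.image_to_string(image)
```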
Embed Code
[](https://skillshield.io/report/89aa2c2732699e0c)
Powered by SkillShield