Trust Assessment
desktop-control-1-0-0 received a trust score of 58/100, placing it in the Caution category. The skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 1 critical, 2 high, 2 medium, and 1 low severity. Key findings include a missing required `name` field, arbitrary command execution via `_run_command`, and data exfiltration via screenshots, clipboard contents, and command output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 31/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Command Execution via `_run_command`**: The `DesktopController._run_command` method allows arbitrary shell command execution, including with `shell=True`. The `AIDesktopAgent._execute_step` method provides a direct action type (`'run_command'`) to invoke this. An attacker could craft a malicious prompt that instructs the LLM to generate a step calling `run_command` with arbitrary commands. While `require_approval` is a safeguard, it can be bypassed if the `DesktopController` is initialized with `require_approval=False` or if the execution environment is non-interactive, allowing the LLM to execute system commands without user consent. **Remediation:** Remove or strictly limit the `_run_command` functionality. If shell execution is absolutely necessary, always use `shell=False` and rigorously validate and sanitize all command arguments. Implement robust input validation for any LLM-generated commands, and ensure `require_approval` cannot be bypassed or disabled by the LLM. | LLM | `__init__.py:235` |
| HIGH | **Data Exfiltration via Screenshots, Clipboard, and Command Output**: The skill can capture screenshots (`screenshot`), read clipboard content (`get_from_clipboard`), and execute arbitrary commands (`_run_command`) that can read files or environment variables. The `AIDesktopAgent._execute_step` method explicitly converts captured screenshots to base64 and returns them, creating a direct channel for exfiltrating visual data. The output of `_run_command` is also returned, allowing exfiltration of sensitive information obtained through command execution (e.g., `cat /etc/passwd`). **Remediation:** Implement strict data handling policies. For screenshots, consider redacting sensitive areas or requiring explicit user approval for capture and transmission. For clipboard access, ensure it is used only when explicitly requested by the user for non-sensitive data. For command output, filter or redact potentially sensitive information before returning it to the LLM. | LLM | `ai_agent.py:200` |
| HIGH | **Excessive Permissions: Direct Shell Access**: While desktop control inherently requires broad permissions (mouse, keyboard, screen), the inclusion of a general-purpose `_run_command` function that can execute arbitrary shell commands (especially with `shell=True`) grants excessive and dangerous permissions. This capability goes beyond typical desktop automation and significantly increases the attack surface, allowing the LLM to perform actions far outside its intended scope if exploited. **Remediation:** Re-evaluate the necessity of direct shell command execution. If it is not critical to the core desktop automation functionality, remove it. If required, restrict it to a small set of pre-defined, safe commands with strictly controlled arguments, and always use `shell=False`. | LLM | `__init__.py:235` |
| MEDIUM | **Missing required field: `name`**: The `name` field is required for claude_code skills but is missing from the frontmatter. **Remediation:** Add a `name` field to the SKILL.md frontmatter. | Static | `skills/wpegley/desktop-control-1-0-0/SKILL.md:1` |
| MEDIUM | **Potential Prompt Injection via LLM-driven Planning**: The `AIDesktopAgent._plan_task` method explicitly notes a 'TODO: Integrate with OpenClaw LLM for intelligent planning'. If this integration is implemented without robust input sanitization and output validation, a malicious prompt could trick the LLM into generating a plan that includes the dangerous `run_command` action type with arbitrary commands, leading to command injection. Even with the current rule-based planning, a crafted task description could trigger unintended actions if the matching patterns are not sufficiently robust. **Remediation:** When integrating LLM-driven planning, strictly validate and sanitize the LLM's output before executing any actions, scrutinizing any generated `run_command` actions and their parameters in particular. Consider a human-in-the-loop approval process for LLM-generated commands, or restrict the LLM to a predefined, safe set of actions. | LLM | `ai_agent.py:139` |
| LOW | **Unpinned Dependencies in Installation Instructions**: The `SKILL.md` file provides `pip install` commands for dependencies (`pyautogui pillow opencv-python pygetwindow`) without specifying exact version numbers. This creates a supply chain risk: if a malicious version of a dependency is published under the same name with a higher version number, it could introduce vulnerabilities or backdoors into the skill's environment upon installation. **Remediation:** Pin all dependencies to specific, known-good versions (e.g., `pyautogui==0.9.59`). Regularly review and update dependency versions to incorporate security patches while maintaining version control. | LLM | `SKILL.md:59` |
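The unpinned-dependency finding can be addressed with a pinned `requirements.txt`. The version numbers below are illustrative placeholders, not vetted known-good releases; teams should pin to versions they have actually reviewed:

```text
pyautogui==0.9.54
pillow==10.3.0
opencv-python==4.9.0.80
pygetwindow==0.0.9
```

Installing with `pip install -r requirements.txt` (optionally with `--require-hashes` for hash pinning) then yields a reproducible environment instead of whatever versions happen to be latest.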
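The critical `_run_command` finding recommends `shell=False` plus strict validation of LLM-generated commands. A minimal sketch of that pattern, assuming a hypothetical allowlist (`ALLOWED_COMMANDS`) and helper name (`run_command_safely`) that are not part of the skill itself:

```python
import shlex
import subprocess

# Hypothetical allowlist: only these executables may ever be invoked.
ALLOWED_COMMANDS = {"ls", "pwd", "whoami"}

def run_command_safely(command: str) -> str:
    """Run an allowlisted command with shell=False and tokenized arguments."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"Command not allowed: {command!r}")
    # shell=False means metacharacters (;, |, $()) are passed as literal
    # arguments rather than interpreted by a shell.
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout
```

With this shape, an injected payload such as `cat /etc/passwd; rm -rf /` is rejected at the allowlist check before any process is spawned.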
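The prompt-injection finding similarly recommends validating LLM-generated plans before execution. A minimal sketch, assuming a hypothetical `SAFE_ACTIONS` set and a plan format of dicts keyed by `action`; the skill's actual step schema may differ:

```python
# Hypothetical set of actions the agent is permitted to execute.
SAFE_ACTIONS = {"click", "type_text", "move_mouse", "screenshot"}

def validate_plan(steps: list[dict]) -> list[dict]:
    """Reject any LLM-generated step whose action is not on the safe list."""
    for step in steps:
        action = step.get("action")
        if action not in SAFE_ACTIONS:
            raise ValueError(f"Disallowed action in plan: {action!r}")
    return steps
```

Run every plan through a gate like this before `_execute_step` ever sees it, so a step injecting `run_command` fails closed instead of reaching the executor.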