Trust Assessment
win-mouse-native received a trust score of 80/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings: the LLM is instructed to perform shell execution via `exec` from untrusted content, and the skill grants direct control over the user's desktop environment.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | LLM instructed to perform shell execution via `exec` from untrusted content | LLM | SKILL.md:17 |
| MEDIUM | Skill grants direct control over user's desktop environment | LLM | SKILL.md:17 |

HIGH: LLM instructed to perform shell execution via `exec` from untrusted content (LLM layer, SKILL.md:17)
The `SKILL.md` file, which is treated as untrusted input, explicitly instructs the host LLM to use `exec` to run the `win-mouse` script. This is a direct instruction from untrusted content for the LLM to perform shell execution. While `win-mouse.ps1` itself implements robust argument validation (casting to `[int]` or whitelisting strings), telling the LLM to use a powerful primitive like `exec` on the basis of untrusted documentation introduces significant risk. An attacker could craft prompts that lead the LLM to misuse the `exec` capability to run arbitrary commands, bypassing the script's internal protections, or to manipulate arguments in ways the LLM does not anticipate, even if the script itself would error out. The fundamental risk is the LLM being told to execute shell commands by untrusted input.
Remediation: Avoid instructing the LLM to use raw `exec` directly from untrusted skill documentation. If shell execution is absolutely necessary, the LLM should call a pre-defined, sandboxed tool function with strict argument validation and whitelisting, rather than constructing and executing arbitrary shell commands. Validate and sanitize all arguments passed to any `exec` call, and consider whether a more constrained API or a sandboxed environment can replace direct shell execution.

MEDIUM: Skill grants direct control over user's desktop environment (LLM layer, SKILL.md:17)
When executed, `win-mouse-native` provides direct control of the Windows mouse cursor, allowing clicks, drags, and movements anywhere on the user's screen. This capability is central to the skill's purpose, but it grants the AI agent significant power over the user's desktop environment. If misused, whether through a compromised LLM or malicious user input, it could lead to unintended actions such as clicking sensitive UI elements, navigating to malicious websites, or triggering actions within applications that result in data loss, unauthorized access, or system compromise. `SKILL.md` instructs the LLM to use this capability based on user requests, and an LLM's interpretation of such requests can be vulnerable to manipulation.
Remediation: Implement robust guardrails and explicit user confirmation mechanisms for any skill that performs direct system interaction or UI automation. Consider limiting the scope of the skill's execution environment (e.g., a virtual desktop or restricted user account) to minimize potential damage. Educate users about the risks of granting AI agents direct control over their system, and ensure the LLM's decision-making process for using such powerful tools is tightly constrained and auditable.
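The HIGH finding's remediation, a pre-defined tool function with whitelisting instead of raw `exec`, can be sketched as follows. This is a minimal illustration, not code from win-mouse-native: the function name, the action whitelist, and the argument vector are all hypothetical, assuming the script accepts an action plus integer coordinates as the report describes.

```python
import subprocess  # used only via a fixed argument list, never a shell string

# Hypothetical sketch (not part of win-mouse-native): expose one fixed tool
# function that whitelists the action and validates every argument before
# anything runs, instead of letting the model build a raw `exec` string.
ALLOWED_ACTIONS = {"move", "click", "drag"}

def run_mouse_action(action: str, x: int, y: int) -> list[str]:
    """Validate inputs, then return the fixed argument vector to execute."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action not whitelisted: {action!r}")
    # int() rejects anything that is not a plain integer coordinate,
    # mirroring the [int] casts the report credits to win-mouse.ps1.
    x, y = int(x), int(y)
    # A fixed list with shell=False leaves no shell parsing to inject into.
    return ["powershell", "-File", "win-mouse.ps1", action, str(x), str(y)]

argv = run_mouse_action("click", 100, 200)
# subprocess.run(argv, shell=False)  # commented out: nothing to run in a demo
```

Because the command is assembled as an argument list and the action names are whitelisted, a manipulated prompt can at worst request a different allowed action, not an arbitrary command.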
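The MEDIUM finding's remediation asks for explicit user confirmation and auditability before the agent touches the desktop. A minimal sketch of that gate, with hypothetical names not taken from the skill, might look like this:

```python
# Hypothetical sketch (not part of win-mouse-native): require explicit user
# approval before any desktop-control action, and record each decision so
# the agent's use of the tool remains auditable.
audit_log: list[dict] = []

def confirm_and_dispatch(action: str, target: str, ask_user) -> bool:
    """Run a desktop action only if the user explicitly approves it."""
    approved = bool(ask_user(f"Allow mouse action {action!r} on {target!r}?"))
    audit_log.append({"action": action, "target": target, "approved": approved})
    if not approved:
        return False
    # Dispatch to the real automation here; omitted in this sketch.
    return True

# Simulated prompt callback that denies the request.
result = confirm_and_dispatch("click", "Save button", lambda msg: False)
```

Denial is the default path, and every request, approved or not, lands in the audit log, which is what makes the agent's decisions reviewable after the fact.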
Full report: https://skillshield.io/report/351798bf9e46456d