# Trust Assessment
vizclaw received a trust score of 58/100, placing it in the Caution category. This skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 2 medium, and 0 low severity. Key findings include a suspicious `urllib.request` import, remote script execution via an unpinned URL, data exfiltration via configurable remote endpoints, and arbitrary file/environment-variable reading.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
## Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Remote Script Execution via Unpinned URL.** The skill's documentation instructs users to execute a Python script directly from a remote URL (`https://vizclaw.com/skills/vizclaw/scripts/connect.py`) using `uv run`. This is a critical supply-chain risk: if the remote server (vizclaw.com) is compromised, the script could be replaced with malicious code, leading to arbitrary code execution on the user's machine with no review or warning. Nothing pins the script to a specific version or hash, so it is fully exposed to changes on the remote server. *Remediation:* avoid direct execution of remote scripts; package the script within the skill, or instruct users to download and review it locally before execution. If remote execution is unavoidable, verify a cryptographic hash of the script and pin to a specific version or commit hash. | LLM | SKILL.md:15 |
| HIGH | **Data Exfiltration via Configurable Remote Endpoints and Arbitrary File/Env Var Reading.** The `connect.py` script streams events, configuration, and potentially file contents to remote API and WebSocket endpoints (`api_url`, `openclaw_gateway_url`, `openclaw_ws_url`) that are configurable via command-line arguments and environment variables. The script also reads sensitive environment variables (e.g., `VIZCLAW_API_KEY`, `VIZCLAW_OPENCLAW_GATEWAY_TOKEN`) and arbitrary local files (via `--openclaw-log-tail`, `--openclaw-jsonl-file`, and `--openclaw-jsonl-dir`, or their corresponding environment variables). An attacker who can manipulate these parameters (e.g., through prompt injection against the LLM, or by setting malicious environment variables) could redirect all streamed data, including sensitive file contents and credentials, to an attacker-controlled server. *Remediation:* strictly validate and sanitize all user-provided URLs and file paths; restrict file access to specific, non-sensitive directories; send API keys and tokens only to trusted, hardcoded domains or domains explicitly whitelisted by the user; warn users about untrusted URLs and paths; consider a proxy or secure vault for credentials. | LLM | scripts/connect.py:210 |
| HIGH | **Credential Harvesting via Configurable Remote Endpoints.** The script reads `VIZCLAW_API_KEY` and `VIZCLAW_OPENCLAW_GATEWAY_TOKEN` from environment variables and includes them in HTTP/WebSocket headers sent to `api_url` and `openclaw_gateway_url`. Header-based authentication is standard, but because these destination URLs are configurable via command-line arguments and environment variables, an attacker who tricks the LLM or user into setting a malicious `api_url` or `openclaw_gateway_url` receives the credentials directly. *Remediation:* as with the data-exfiltration finding, transmit keys and tokens only to trusted, hardcoded domains or domains explicitly whitelisted by the user; validate all URL parameters; and, where stronger authentication is impractical, warn clearly about the risks of configuring untrusted endpoints. | LLM | scripts/connect.py:211 |
| MEDIUM | **Suspicious import: `urllib.request`.** Import of `urllib.request` detected. This module provides network access; network and low-level system modules in skill code may indicate data exfiltration. Verify this import is necessary. | Static | skills/araa47/vizclaw/scripts/connect.py:56 |
| MEDIUM | **Excessive Permissions: Arbitrary File Read.** The script reads arbitrary local files and directories via `--openclaw-log-tail`, `--openclaw-jsonl-file`, and `--openclaw-jsonl-dir` (or their corresponding environment variables), with no apparent path sanitization or restriction to specific directories. A maliciously prompted LLM could instruct the skill to read sensitive system files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, `~/.aws/credentials`) and exfiltrate their contents to a controlled remote endpoint, as described in the data-exfiltration finding. *Remediation:* strictly validate and sanitize paths; restrict reads to designated log directories or user-approved paths; if arbitrary paths are essential, issue strong warnings and require explicit user confirmation for sensitive paths. | LLM | scripts/connect.py:218 |
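The integrity-check remediation for the critical finding can be sketched as follows. This is illustrative only: `fetch_verified` is a hypothetical helper, and the pinned digest would have to be the SHA-256 of a reviewed copy of `connect.py`, published out-of-band by the skill author.

```python
import hashlib
import urllib.request

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of raw bytes, as a hex string."""
    return hashlib.sha256(data).hexdigest()

def fetch_verified(url: str, expected_sha256: str) -> bytes:
    """Download a script and refuse to return it unless its hash matches the pin.

    Hypothetical sketch: the skill does not currently do this.
    """
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    if sha256_hex(data) != expected_sha256:
        raise RuntimeError("refusing to run: remote script does not match pinned hash")
    return data
```

Pinning to a commit hash on a content-addressed host (e.g. a raw Git URL at a fixed commit) achieves the same goal with less machinery, since the content at that URL cannot silently change.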
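The endpoint-allowlist remediation for the two HIGH findings might look like this minimal sketch. The `TRUSTED_HOSTS` set is a hypothetical example; the script does not currently define one:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts that may receive credentials.
TRUSTED_HOSTS = {"vizclaw.com", "api.vizclaw.com"}

def is_trusted_endpoint(url: str) -> bool:
    """Allow credential-bearing requests only to HTTPS URLs on trusted hosts."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS
```

A check like this, applied before attaching `VIZCLAW_API_KEY` or the gateway token to any request, would block the redirect-to-attacker-server scenario even if `api_url` were overridden via flags or environment variables.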
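The path-restriction remediation for the arbitrary-file-read finding could take this shape (`resolve_within` is an illustrative helper, not existing code; it assumes Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

def resolve_within(base_dir: str, candidate: str) -> Path:
    """Resolve a user-supplied relative path, rejecting anything outside base_dir.

    Resolving before checking defeats '../' traversal and symlinked escapes.
    """
    base = Path(base_dir).resolve()
    target = (base / candidate).resolve()
    if not target.is_relative_to(base):
        raise ValueError(f"path escapes allowed directory: {candidate}")
    return target
```

Routing the `--openclaw-log-tail`, `--openclaw-jsonl-file`, and `--openclaw-jsonl-dir` arguments through a check like this, with `base_dir` fixed to a designated log directory, would prevent a prompted read of `/etc/passwd` or `~/.ssh/id_rsa`.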
[View the full report on SkillShield](https://skillshield.io/report/0208571bffc09bee)