Trust Assessment
mulerouter received a trust score of 41/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 0 critical, 3 high, 3 medium, and 0 low severity. Key findings include Unsafe deserialization / dynamic eval, Unpinned Python dependency version, and Arbitrary Local File Read and Exfiltration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/misaka43fd/mulerouter-skills/models/base.py:32 |
| HIGH | **Arbitrary Local File Read and Exfiltration.** The skill can read arbitrary local files from the agent's filesystem and exfiltrate their base64-encoded content to the configured API endpoint. The `file_to_base64` function, triggered by image parameters such as `--image`, opens and reads any file at a local path the agent process can access, so sensitive data (configuration files, SSH keys, other credentials) could be read and sent to the remote API. Remediation: implement strict path validation and sandboxing for local file inputs; restrict file access to a dedicated, non-sensitive directory (e.g., a temporary upload folder) or require explicit user confirmation for reads outside expected image directories; never read arbitrary file paths supplied by untrusted input. | LLM | core/image.py:46 |
| HIGH | **API Key Exfiltration via Malicious Base URL.** The CLI allows overriding the API base URL via the `--base-url` argument. If an attacker can inject this argument into the command executed by the LLM, or trick a user into running it, the `MULEROUTER_API_KEY` (or any other configured API key) is sent to an attacker-controlled server, because `APIClient` includes the `Authorization` header in requests to the resolved `base_url`. Remediation: enforce a strict allowlist of permitted `base_url` values; if custom base URLs are necessary, require explicit user confirmation when the value differs from the trusted default, or ensure the API key is only sent to allowlisted domains. | LLM | core/config.py:78 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code-execution patterns, as in the HIGH-severity instance above. | Manifest | skills/misaka43fd/mulerouter-skills/models/__init__.py:3 |
| MEDIUM | **Unpinned Python dependency version.** The dependency `httpx>=0.27.0` is not pinned to an exact version. Remediation: pin Python dependencies to exact versions where feasible. | Dependencies | skills/misaka43fd/mulerouter-skills/pyproject.toml |
| MEDIUM | **Excessive File System Read Permissions.** The `core/image.py` module allows the `--image` parameter to accept any local file path; it checks for file existence but does not restrict the directories or file types that can be read. This broad access enlarges the attack surface, since an attacker could attempt to read sensitive system files or user data if the agent process has the necessary permissions. Remediation: limit local file access to a dedicated, non-sensitive directory (e.g., a temporary upload directory or a user-specified, sandboxed location) and add granular checks on file types or content where possible. | LLM | core/image.py:37 |
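The path-sandboxing remediation for the file-read findings can be sketched as follows. This is a minimal illustration, not the skill's actual code: the `ALLOWED_DIR` location and the `safe_resolve` helper name are hypothetical, and a real fix would hook this check into wherever `--image` paths are consumed.

```python
from pathlib import Path

# Hypothetical sandbox directory; the skill's real layout may differ.
ALLOWED_DIR = Path("/tmp/mulerouter-uploads")

def safe_resolve(user_path: str, base: Path = ALLOWED_DIR) -> Path:
    """Resolve user_path and refuse anything that escapes the sandbox.

    Path.resolve() collapses `..` segments and symlinks, so traversal
    inputs like `../../etc/passwd` (and absolute paths, which pathlib
    treats as replacing `base`) fail the containment check below.
    """
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base.resolve()):  # Python 3.9+
        raise ValueError(f"path escapes allowed directory: {user_path}")
    return candidate
```

Resolving before comparing is the important design choice: a naive string-prefix check on the raw input misses `..` traversal and symlink tricks that `resolve()` normalizes away.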
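The base-URL allowlist remediation can likewise be sketched in a few lines. The allowed hostname below is an assumption for illustration, not the skill's actual default endpoint, and `validate_base_url` is a hypothetical helper:

```python
from urllib.parse import urlparse

# Hypothetical trusted default; replace with the skill's real API host.
ALLOWED_HOSTS = {"api.mulerouter.ai"}

def validate_base_url(base_url: str) -> str:
    """Accept only HTTPS URLs on allowlisted hosts before attaching credentials."""
    parsed = urlparse(base_url)
    if parsed.scheme != "https":
        raise ValueError(f"refusing non-HTTPS base URL: {base_url}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"refusing unlisted API host: {parsed.hostname!r}")
    return base_url
```

Calling this before constructing the HTTP client ensures the `Authorization` header can only ever be sent to a known host, even if `--base-url` is attacker-controlled.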