Trust Assessment
lygo-mint-operator-suite received a trust score of 23/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 1 critical, 1 high, 5 medium, and 0 low severity. Key findings include arbitrary command execution, unsafe deserialization / dynamic eval, and a dangerous call to `subprocess.run()`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/deepseekoracle/lygo-mint-operator-suite/scripts/mint_pack_local.py:36` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call to `subprocess.run()` detected in function `run_py`; this can execute arbitrary code. *Remediation:* avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | `skills/deepseekoracle/lygo-mint-operator-suite/scripts/mint_pack_local.py:36` |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. *Remediation:* remove obfuscated code-execution patterns. Legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/deepseekoracle/lygo-mint-operator-suite/scripts/verify_pack_v2.py:7` |
| MEDIUM | **Arbitrary file read and write via user-controlled paths.** The `bundle_pack_v2.py` script accepts `--input` and `--out` arguments that specify arbitrary file system paths. The script reads all files from the `--input` directory and writes a zip archive to the `--out` path. An attacker could use this to read sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) and bundle them, or overwrite critical system files via a malicious `--out` path, assuming the agent has the necessary file system permissions. *Remediation:* restrict the `--input` and `--out` paths to a designated safe directory (e.g., a sandbox or skill-specific data directory) using path validation or by prepending a secure base path; alternatively, implement an allowlist for file extensions or specific directories. | LLM | `scripts/bundle_pack_v2.py:23` |
| MEDIUM | **Arbitrary file read via user-controlled input path.** The `mint_pack_v2.py` script accepts an `--input` argument specifying a file or folder path, then reads the contents of all files within that path to generate hashes and a manifest. An attacker could provide a path to sensitive system files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) and exfiltrate their content by having the agent process them and potentially include their hashes, or even canonicalized content, in the generated manifest or ledger, which might then be printed or stored. While the script writes outputs to `STATE_DIR` and `OUT_DIR` within the workspace, the ability to read arbitrary files remains a concern. *Remediation:* restrict the `--input` path to a designated safe directory (e.g., a sandbox or skill-specific data directory) using path validation or by prepending a secure base path. | LLM | `scripts/mint_pack_v2.py:99` |
| MEDIUM | **Arbitrary file read via user-controlled input path.** Similar to `mint_pack_v2.py`, the `verify_pack_v2.py` script accepts an `--input` argument for a file or folder path and reads the contents of files within it to re-calculate hashes for verification. An attacker could use this to read sensitive files by providing a malicious `--input` path. Although the script's primary output is a verification status, the underlying file-reading capability poses a risk. *Remediation:* restrict the `--input` path to a designated safe directory (e.g., a sandbox or skill-specific data directory) using path validation or by prepending a secure base path. | LLM | `scripts/verify_pack_v2.py:30` |
| MEDIUM | **Arbitrary file read via user-controlled path passed to subprocess.** The `mint_pack_local.py` script acts as a wrapper, calling `mint_pack.py` (likely the `mint_pack_v2.py` equivalent) via `subprocess.run()`. The user-controlled `--pack` argument is passed directly to the subprocess, and the underlying script then reads files from this path, allowing an attacker to instruct the agent to read arbitrary files. *Remediation:* restrict the `--pack` path to a designated safe directory (e.g., a sandbox or skill-specific data directory) using path validation or by prepending a secure base path before passing it to the subprocess. | LLM | `scripts/mint_pack_local.py:44` |
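The four medium-severity findings share the same remediation: confine user-supplied `--input`/`--out`/`--pack` paths to a designated safe directory before any file access. A minimal sketch of that validation, assuming Python 3.9+ and a hypothetical `SAFE_BASE` workspace root (the names are illustrative, not part of the skill's actual code):

```python
from pathlib import Path

# Hypothetical skill-specific data directory; adjust to the real sandbox root.
SAFE_BASE = Path("workspace").resolve()

def resolve_safe(user_path: str) -> Path:
    """Join a user-supplied path onto SAFE_BASE and reject escapes.

    An absolute user_path replaces the base when joined, and '..'
    segments can climb out of it, so the result must be normalized
    with resolve() and then checked against the base.
    """
    candidate = (SAFE_BASE / user_path).resolve()
    if not candidate.is_relative_to(SAFE_BASE):  # Path.is_relative_to: Python 3.9+
        raise ValueError(f"path escapes safe directory: {user_path!r}")
    return candidate
```

With a check like this in place, `mint_pack_local.py` could call `resolve_safe(args.pack)` before handing the path to `subprocess.run()` with a list argument and no `shell=True`, which would also address the critical shell-execution finding.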
[View the full report](https://skillshield.io/report/d3997ba9ce7a93a4)
Powered by SkillShield