Trust Assessment
hugging-face-model-trainer received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 14 findings: 7 critical, 6 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, dangerous `subprocess.run()` calls, and a suspicious `urllib.request` import.
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, static_code_analysis, and dependency_graph. The manifest_analysis layer scored lowest at 0/100, the weakest result of the four.
Last analyzed on February 11, 2026 (commit 3f4f55d6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (14)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Fix:* review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Unknown | scripts/convert_to_gguf.py:55 |
| CRITICAL | Same finding as above. | Unknown | scripts/convert_to_gguf.py:63 |
| CRITICAL | Same finding as above. | Unknown | scripts/convert_to_gguf.py:64 |
| CRITICAL | Same finding as above. | Unknown | scripts/convert_to_gguf.py:81 |
| CRITICAL | **Command injection via user-controlled script content in `hf_jobs()`.** The skill explicitly instructs the LLM to construct a Python script from user input and pass it directly as a string to the `hf_jobs()` MCP tool's `script` parameter. If the user provides malicious Python code, it is embedded and executed in the Hugging Face Jobs environment, leading to arbitrary command execution. *Fix:* rigorously sanitize and validate any user-provided input before embedding it in the script string. Ideally, user input should only set predefined parameters or select from a whitelist of options, never contribute directly to executable code. Consider a templating engine with strict escaping or a more structured input method for script generation. | Unknown | SKILL.md:80 |
| CRITICAL | **Arbitrary code execution via `trust_remote_code=True` with user-controlled model ID.** `scripts/convert_to_gguf.py` loads models with `AutoModelForCausalLM.from_pretrained` and `AutoTokenizer.from_pretrained` using `trust_remote_code=True`, with `BASE_MODEL` and `ADAPTER_MODEL` read from environment variables. An attacker who controls those variables (e.g. via the `secrets` parameter in `hf_jobs()`) can point at a malicious Hub model whose configuration code then executes inside the job. *Fix:* avoid `trust_remote_code=True` with user-controlled model IDs. If it is strictly necessary, validate `BASE_MODEL` and `ADAPTER_MODEL` against a whitelist of trusted model IDs and keep the environment variables out of untrusted control. | Unknown | scripts/convert_to_gguf.py:182 |
| CRITICAL | **Arbitrary code execution via `trust_remote_code=True` with user-controlled base model (Unsloth script).** `scripts/unsloth_sft_example.py` loads the base model with `unsloth.FastLanguageModel.from_pretrained`, taking the model ID from the `--base-model` command-line argument. If `trust_remote_code=True` is implicitly or explicitly enabled by Unsloth or the underlying `transformers` library, an attacker who controls `--base-model` (e.g. via `script_args` in `hf_jobs()`) can point at a malicious Hub model whose configuration code then executes inside the job. *Fix:* validate `--base-model` against a whitelist of trusted model IDs and keep it out of untrusted control. | Unknown | scripts/unsloth_sft_example.py:134 |
| HIGH | **Dangerous call: `subprocess.run()`** in function `run_command`. This can execute arbitrary code. *Fix:* avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Unknown | scripts/convert_to_gguf.py:81 |
| HIGH | **Dangerous call: `subprocess.run()`** in function `check_system_dependencies`. This can execute arbitrary code. *Fix:* as above. | Unknown | scripts/convert_to_gguf.py:55 |
| HIGH | Same finding as above. | Unknown | scripts/convert_to_gguf.py:63 |
| HIGH | Same finding as above. | Unknown | scripts/convert_to_gguf.py:64 |
| HIGH | **Data exfiltration via user-controlled `OUTPUT_REPO` in GGUF conversion.** `scripts/convert_to_gguf.py` uploads the converted GGUF model to the Hub repository named by the `OUTPUT_REPO` environment variable. An attacker who controls it (e.g. via the `secrets` parameter in `hf_jobs()`) can redirect the fine-tuned model to their own repository. *Fix:* keep `OUTPUT_REPO` out of untrusted control; validate it against a whitelist of allowed repositories or require it to belong to the user's own namespace. The LLM should ensure the `hub_model_id` (and thus `OUTPUT_REPO`) always names a repository owned by the legitimate user. | Unknown | scripts/convert_to_gguf.py:161 |
| HIGH | **Data exfiltration via user-controlled `output-repo` (Unsloth script).** `scripts/unsloth_sft_example.py` pushes the fine-tuned model to the Hub repository named by the `--output-repo` command-line argument. An attacker who controls it (e.g. via `script_args` in `hf_jobs()`) can redirect the model to their own repository. *Fix:* as above, validate `--output-repo` against a whitelist or require the user's own namespace. | Unknown | scripts/unsloth_sft_example.py:146 |
| MEDIUM | **Suspicious import: `urllib.request`.** This module provides network access. *Fix:* verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Unknown | scripts/dataset_inspector.py:24 |
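The shell-execution findings recommend static, list-form commands with absolute paths. A minimal sketch of that pattern, using a hypothetical hardened `run_command` helper (the name echoes the flagged function, but this code is illustrative, not the skill's actual implementation):

```python
import shutil
import subprocess

def run_command(argv: list) -> str:
    """Run a command safely: list argv, no shell, absolute executable path.

    - argv is a list, never a string, so nothing is parsed by a shell
      (shell=False is the default for list argv);
    - the executable is resolved to an absolute path up front;
    - check=True turns non-zero exits into exceptions instead of being
      silently ignored.
    """
    exe = shutil.which(argv[0])
    if exe is None:
        raise FileNotFoundError(f"{argv[0]!r} not found on PATH")
    done = subprocess.run([exe, *argv[1:]], capture_output=True,
                          text=True, check=True)
    return done.stdout

# Shell metacharacters in an argument stay literal data, not commands:
print(run_command(["echo", "model; rm -rf ~"]))
```

Because no shell ever sees the arguments, the `;` above is printed verbatim rather than starting a second command.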
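The script-injection finding against `SKILL.md` suggests letting user input only fill predefined parameters of a fixed template. A minimal sketch of that approach; the parameter names, regex, and template below are assumptions for illustration, not the skill's real interface:

```python
import json
import re

# Hypothetical pattern for a Hub dataset ID ("namespace/name").
_DATASET_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def build_training_script(dataset_id: str, epochs: int) -> str:
    """Embed user input only as validated, JSON-quoted literals.

    The template is fixed; user values can select a dataset and tune a
    number, but can never contribute executable Python text.
    """
    if not _DATASET_RE.match(dataset_id):
        raise ValueError(f"bad dataset id: {dataset_id!r}")
    if not (1 <= epochs <= 100):
        raise ValueError("epochs out of range")
    return (
        "from datasets import load_dataset\n"
        f"ds = load_dataset({json.dumps(dataset_id)})\n"
        f"EPOCHS = {epochs:d}\n"
    )
```

An injection attempt such as `x/y"); import os  # ` fails the regex check and is rejected before any script text is built.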
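For the `trust_remote_code` findings, one remediation is an explicit allowlist checked before any model load. A sketch of such a guard; the model IDs here are placeholders, not recommendations, and the `from_pretrained` call is shown only in a comment:

```python
# Hypothetical allowlist for BASE_MODEL / ADAPTER_MODEL values.
TRUSTED_MODELS = frozenset({
    "meta-llama/Llama-3.1-8B",
    "Qwen/Qwen2.5-7B",
})

def require_trusted(model_id: str) -> str:
    """Refuse any model ID outside the allowlist.

    Only after this check should a loader run, and even then
    trust_remote_code should stay False unless the model demonstrably
    needs it.
    """
    if model_id not in TRUSTED_MODELS:
        raise ValueError(f"model id {model_id!r} is not on the trusted allowlist")
    return model_id

# model = AutoModelForCausalLM.from_pretrained(require_trusted(base_model),
#                                              trust_remote_code=False)
```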
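The exfiltration findings recommend confining `OUTPUT_REPO` / `--output-repo` to the user's own namespace. A minimal sketch, assuming the caller's namespace comes from an authenticated source (such as a `whoami` call) rather than from the same untrusted input that supplied the repo ID:

```python
def validate_output_repo(repo_id: str, user_namespace: str) -> str:
    """Accept only repos in the caller's own namespace (e.g. 'alice/...').

    user_namespace must come from an authenticated identity lookup, not
    from job arguments or secrets an attacker could set.
    """
    namespace, sep, name = repo_id.partition("/")
    if not sep or not name or namespace != user_namespace:
        raise ValueError(
            f"refusing to push to {repo_id!r}: outside namespace {user_namespace!r}"
        )
    return repo_id
```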
[Full report](https://skillshield.io/report/a1375561175599ce)
Powered by SkillShield