Trust Assessment
ml-model-retrainer received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include Arbitrary Code Execution via Unsafe Pickle Deserialization (critical), Arbitrary File Read/Write via User-Controlled Paths (high), and a missing required `name` field in the frontmatter (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Code Execution via Unsafe Pickle Deserialization** The `load_model` method calls `pickle.load()` on a caller-supplied `path`. If an attacker can control this argument (e.g., through a malicious prompt or external input), they can point it at a specially crafted pickle file, and deserializing that file can execute arbitrary code on the system where the skill runs. This is a well-known, severe weakness of Python's `pickle` module. Avoid `pickle` for data from untrusted sources. If `pickle` must be used, strictly validate the `path` argument so it only reaches trusted, skill-managed files inside a sandboxed directory, or switch to safer serialization formats such as JSON, Protocol Buffers, or, for models, formats like ONNX. | LLM | SKILL.md:260 |
| HIGH | **Arbitrary File Read/Write via User-Controlled Paths** The `load_model` and `save_model` methods accept a `path` argument which, if user-controlled, allows reading from or writing to arbitrary filesystem locations. `load_model(path)` lets an attacker read any file's contents, leading to data exfiltration; `save_model(path=...)` lets an attacker write arbitrary data (a pickled model) anywhere, potentially overwriting critical system files, planting malicious content, or causing a denial of service. This constitutes excessive permissions, since the skill can access files outside its intended scope. Restrict file operations to a designated sandboxed directory; reject directory traversal sequences (e.g., `../`) and absolute paths; and only accept paths that resolve inside the allowed `model_dir`. | LLM | SKILL.md:248 |
| MEDIUM | **Missing required field: `name`** The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | skills/datadrivenconstruction/ml-model-retrainer/SKILL.md:1 |
| MEDIUM | **Unpinned Dependencies in `pip install` Command** The Dependencies section specifies `pip install pandas numpy scikit-learn` without pinning versions. This makes builds non-deterministic and introduces a supply chain risk: future installations may pull newer versions with breaking changes, new vulnerabilities, or even malicious code if a package maintainer's account is compromised. Pin all dependencies to exact versions (e.g., `pandas==1.5.3`, `numpy==1.24.4`, `scikit-learn==1.2.2`), maintain a `requirements.txt` generated with `pip freeze > requirements.txt` or similar tools, and regularly audit and update dependencies. | LLM | SKILL.md:330 |
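The pinning fix from the last finding amounts to a `requirements.txt` with exact versions, installed via `pip install -r requirements.txt`. The versions below are the examples given in the finding, not verified current releases:

```
# requirements.txt -- pinned versions as recommended by the finding,
# typically generated from a known-good environment with `pip freeze`
pandas==1.5.3
numpy==1.24.4
scikit-learn==1.2.2
```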
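The path-confinement mitigation recommended for the HIGH finding can be sketched as follows. The function name `resolve_model_path` and the `model_dir` layout are illustrative assumptions, not taken from the skill's actual code:

```python
from pathlib import Path

def resolve_model_path(model_dir: str, user_path: str) -> Path:
    """Resolve a user-supplied path and confirm it stays inside model_dir.

    Rejects absolute paths and ../ traversal by resolving the candidate
    path and comparing it against the sandbox root.
    """
    base = Path(model_dir).resolve()
    # Joining an absolute user_path replaces base entirely; resolve()
    # then collapses any ../ segments, so both cases are caught below.
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Path.is_relative_to: Python 3.9+
        raise ValueError(f"path escapes model_dir: {user_path}")
    return candidate
```

Because the check runs on the fully resolved path, `../../etc/passwd` and `/etc/passwd` are both rejected even though neither string literally starts outside `model_dir`.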
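Where `pickle` cannot be avoided, Python's documented `Unpickler.find_class` hook can restrict which globals a pickle is allowed to reference. A minimal sketch; the allow-list below is illustrative, and a real skill would extend it to the model classes it actually owns:

```python
import builtins
import io
import pickle

# Only these builtins may be referenced by a pickle (illustrative list).
SAFE_BUILTINS = {"dict", "list", "set", "tuple", "frozenset"}

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global outside a small allow-list."""

    def find_class(self, module, name):
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(
            f"global {module}.{name} is forbidden")

def restricted_loads(data: bytes):
    """Deserialize bytes with the restricted unpickler."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers deserialize normally, while a pickle that references something like `os.system` raises `UnpicklingError` instead of importing it. This limits, but does not fully eliminate, pickle's attack surface, so path confinement is still required alongside it.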
Scan History
Embed Code
[SkillShield Report](https://skillshield.io/report/505221688e7c3d96)
Powered by SkillShield