Trust Assessment
ml-model-builder received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 0 high, 1 medium, and 0 low severity. Key findings include "Arbitrary File Write via User-Controlled Path" (critical, two instances) and "Missing required field: name" (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary File Write via User-Controlled Path.** The `export_model` method writes arbitrary JSON content to any file path specified by the `output_path` parameter. An attacker could exploit this to overwrite critical system files, inject malicious configuration, or cause a denial of service by filling disk space in sensitive locations. The written content is the model's internal data rather than malicious code, but the ability to write to any location is a severe vulnerability. *Remediation:* restrict `output_path` to a safe, designated directory (e.g., a temporary or user-specific output directory); sanitize the path to prevent directory traversal (e.g., `../`); consider UUID filenames to prevent collisions. | LLM | SKILL.md:306 |
| CRITICAL | **Arbitrary File Write via User-Controlled Path.** The `export_to_excel` method writes an Excel file to any file path specified by the `output_path` parameter. As with `export_model`, an attacker could exploit this to overwrite critical system files, inject malicious configuration, or cause a denial of service in sensitive locations. The written content is a summary of models, but writing to arbitrary locations is a severe vulnerability. *Remediation:* restrict `output_path` to a safe, designated directory; sanitize the path to prevent directory traversal (e.g., `../`); consider UUID filenames to prevent collisions. | LLM | SKILL.md:345 |
| MEDIUM | **Missing required field: name.** The `name` field is required for claude_code skills but is missing from the frontmatter. *Remediation:* add a `name` field to the SKILL.md frontmatter. | Static | skills/datadrivenconstruction/ml-model-builder/SKILL.md:1 |
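The remediation suggested for both critical findings can be sketched as a small path-validation helper. This is a minimal illustration, not code from the skill itself: the directory name, function name, and collision-handling strategy below are all hypothetical.

```python
from pathlib import Path
from uuid import uuid4

# Hypothetical safe-export directory; a real skill would make this configurable.
SAFE_EXPORT_DIR = Path("/tmp/ml-model-builder-exports")

def resolve_safe_output_path(requested_name: str) -> Path:
    """Map a user-supplied filename to a path inside SAFE_EXPORT_DIR.

    Keeps only the final path component (dropping any `../` or absolute
    prefix), falls back to a UUID filename when the name is empty, and
    double-checks that the resolved path stays inside the safe directory.
    """
    SAFE_EXPORT_DIR.mkdir(parents=True, exist_ok=True)
    # Stripping everything but the last component defeats traversal input
    # such as "../../etc/passwd" or "/etc/passwd".
    stem = Path(requested_name).name or f"export-{uuid4().hex}.json"
    candidate = (SAFE_EXPORT_DIR / stem).resolve()
    # Defense in depth: refuse anything that still escapes the directory.
    if SAFE_EXPORT_DIR.resolve() not in candidate.parents:
        raise ValueError(f"refusing to write outside {SAFE_EXPORT_DIR}")
    return candidate
```

An `export_model`-style method would then write only to `resolve_safe_output_path(output_path)` instead of the raw `output_path` value.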
[Full report](https://skillshield.io/report/7864446e226d2313)