Trust Assessment
risk-assessment-ml received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 informational. Key findings include Untrusted Deserialization (`joblib.load`), Uncontrolled File Write Path, and a missing required `name` field in the skill manifest.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted Deserialization (joblib.load).** The `load_models` method uses `joblib.load` with a `path` argument. If this `path` can be controlled by an untrusted user, it allows arbitrary code execution through deserialization of a malicious pickle file; `joblib.load` is not safe against maliciously constructed data. Ensure the `path` argument for `load_models` is never user-controlled or derived from untrusted input, and load models only from trusted, internal sources. If user-provided models are necessary, implement strict validation or use safer serialization formats (e.g., ONNX, PMML) that do not allow arbitrary code execution. | LLM | SKILL.md:262 |
| HIGH | **Uncontrolled File Write Path.** The `generate_risk_report` function accepts an `output_path` argument without apparent validation. If this path is user-controlled, an attacker could specify arbitrary file paths, leading to path traversal, overwriting critical files, or writing to sensitive directories, which could cause data loss, denial of service, or privilege escalation depending on the execution environment's permissions. Implement strict validation and sanitization for `output_path` and restrict output to a designated, sandboxed directory. Consider using a UUID or similar for filenames to prevent overwrites, or ensure the path is relative to a secure base directory and contains no path traversal sequences (e.g., '..'). | LLM | SKILL.md:298 |
| MEDIUM | **Missing required field: name.** The 'name' field is required for claude_code skills but is missing from the frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/datadrivenconstruction/risk-assessment-ml/SKILL.md:1 |
| INFO | **Hardcoded Input File Path.** The skill reads a hardcoded file, 'project_history.csv', assuming it exists in the execution environment's working directory. If the skill is deployed where this file is absent or expected to be user-provided, this could cause runtime errors or unexpected behavior; with broad file system access, it could also inadvertently read a sensitive file of the same name in an accessible location. Parameterize the input file path to allow flexible data sources, or clearly document the expectation that 'project_history.csv' be present in the skill's execution context, and sandbox the environment to prevent unintended file access. | LLM | SKILL.md:20 |
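The critical and high findings above share one root cause: file paths that may be attacker-controlled. A minimal mitigation sketch is shown below, assuming the skill is free to confine models and reports to fixed directories; the directory constants and helper names (`resolve_inside`, `load_model_safely`, `report_output_path`) are illustrative, not part of the skill's actual API.

```python
from pathlib import Path

MODELS_DIR = Path("/opt/skill/models")    # trusted, read-only model store (illustrative)
REPORTS_DIR = Path("/opt/skill/reports")  # sandboxed report output directory (illustrative)

def resolve_inside(base: Path, candidate: str) -> Path:
    """Resolve `candidate` against `base`, rejecting any path that escapes
    it (via '..', absolute paths, or similar traversal tricks)."""
    resolved = (base / candidate).resolve()
    if not resolved.is_relative_to(base.resolve()):  # Python 3.9+
        raise ValueError(f"path escapes {base}: {candidate!r}")
    return resolved

def load_model_safely(name: str):
    """Deserialize only from the trusted store: joblib.load executes
    arbitrary code on a malicious pickle, so the path must never be
    derived from untrusted input."""
    import joblib
    return joblib.load(resolve_inside(MODELS_DIR, name))

def report_output_path(filename: str) -> Path:
    """Confine report writes to the sandboxed reports directory."""
    return resolve_inside(REPORTS_DIR, filename)
```

Containment via `Path.resolve()` plus `is_relative_to` handles both findings with one check; for user-supplied models, a non-executable format such as ONNX or PMML remains the safer option, as the report recommends.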