Trust Assessment
defect-detection-ai received a trust score of 58/100, placing it in the Caution category. The skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, and 1 medium severity (no low-severity findings). In order of severity: arbitrary code execution via `torch.load` (deserialization vulnerability), arbitrary file read via `image_path`, and a missing required `name` field in the manifest.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary code execution via `torch.load` (deserialization vulnerability).** The `DefectDetectionModel` class uses `torch.load(model_path, ...)` to load a pre-trained model. `torch.load` internally uses Python's `pickle` module, which is insecure against maliciously constructed data: an attacker who controls the `model_path` argument can supply a crafted pickle file that executes arbitrary code on deserialization, allowing full system compromise. *Mitigation:* never load model files from untrusted sources; verify model integrity (e.g. cryptographic hashes) before loading; and select `model_path` from a predefined set of trusted models rather than accepting it directly from users. | LLM | SKILL.md:120 |
| HIGH | **Arbitrary file read via `image_path`.** The skill uses `PIL.Image.open()` and `cv2.imread()` to load images from a provided `image_path`. If this argument is user-controlled, an attacker can supply paths to arbitrary files on the system (e.g. `/etc/passwd`), whose contents may then be exfiltrated via the skill's return value or error messages. This grants read access far beyond the intended scope of image processing. *Mitigation:* strictly validate `image_path` so it resolves only to allowed image files inside a designated, sandboxed directory; consider a file picker or content ID instead of raw paths. | LLM | SKILL.md:50 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | skills/datadrivenconstruction/defect-detection-ai/SKILL.md:1 |
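The two LLM-layer mitigations above can be sketched together. This is a minimal illustration, not the skill's actual code: `TRUSTED_MODELS`, `IMAGE_ROOT`, `resolve_image_path`, and `load_model` are hypothetical names, the digest is a placeholder, and `weights_only=True` requires PyTorch 1.13 or later.

```python
import hashlib
from pathlib import Path

# Hypothetical registry: model name -> (path, expected SHA-256 digest).
# The digest below is a placeholder, not a real hash.
TRUSTED_MODELS = {
    "defect-detector-v1": ("models/defect_v1.pt", "0" * 64),
}

# Hypothetical sandboxed directory for user-supplied images.
IMAGE_ROOT = Path("/data/images").resolve()


def resolve_image_path(user_path: str) -> Path:
    """Reject paths that escape the sandboxed image directory."""
    candidate = (IMAGE_ROOT / user_path).resolve()
    # Path.is_relative_to is available from Python 3.9.
    if not candidate.is_relative_to(IMAGE_ROOT):
        raise ValueError(f"path escapes image root: {user_path}")
    if candidate.suffix.lower() not in {".png", ".jpg", ".jpeg", ".bmp"}:
        raise ValueError(f"unsupported image type: {candidate.suffix!r}")
    return candidate


def load_model(name: str):
    """Load only pre-registered models, verifying integrity first."""
    import torch  # imported lazily so path checks are testable without torch

    try:
        path, expected = TRUSTED_MODELS[name]
    except KeyError:
        raise ValueError(f"unknown model: {name}")
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"integrity check failed for {path}")
    # weights_only=True refuses arbitrary pickled objects (PyTorch >= 1.13).
    return torch.load(path, weights_only=True)
```

The key design choice is an allowlist in both directions: users pick a model by name rather than by path, and image paths are resolved and confined to one directory before any file is opened.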
Full report: https://skillshield.io/report/d17c665d29fa21a2
Powered by SkillShield