Trust Assessment
bim-validation-report received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Arbitrary Code Execution via Dynamically Executed Callable (critical), Excessive Permissions: Arbitrary File Write via User-Controlled Output Path (high), and Missing required field: name (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Code Execution via Dynamically Executed Callable.** The `BIMValidationEngine.add_custom_rule` method allows adding new validation rules with a user-defined `check_function: Callable`. This callable is later executed by the `validate_element` method. If the `check_function` is generated by the LLM based on untrusted user input (e.g., a prompt asking for a rule that performs a specific action), it can lead to arbitrary code execution within the skill's environment. An attacker could craft input that, when interpreted by the LLM to generate the `check_function`, results in malicious Python code (e.g., `lambda e: __import__('os').system('rm -rf /')`). *Remediation:* implement strict sanitization or an allow-list for dynamically generated `check_function` logic, and avoid generating executable code from untrusted input. If custom rules are necessary, consider a sandboxed execution environment or a declarative rule definition language that is parsed and executed safely, rather than directly executing arbitrary Python callables. | LLM | SKILL.md:220 |
| HIGH | **Excessive Permissions: Arbitrary File Write via User-Controlled Output Path.** The `generate_validation_report` function accepts an `output_path` argument, which is passed directly to `BIMValidationEngine.export_report` to create an Excel file via `pd.ExcelWriter(output_path, ...)`. An attacker who controls `output_path` can specify an arbitrary file path on the system, potentially overwriting critical system files, writing sensitive data to an attacker-controlled location, or causing a denial of service. This grants the skill write permissions beyond its intended scope. *Remediation:* validate and sanitize `output_path` so it falls within an allowed, sandboxed directory; prevent path traversal (e.g., `../`) and restrict writes to designated output directories. Consider returning the report data directly rather than writing to a file, or requiring explicit user confirmation for writes to specific locations. | LLM | SKILL.md:299 |
| MEDIUM | **Missing required field: name.** The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | skills/datadrivenconstruction/bim-validation-report/SKILL.md:1 |
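The medium-severity finding is fixed by adding the required `name` field to the SKILL.md frontmatter. A minimal sketch (the `description` value here is illustrative, not taken from the skill):

```yaml
---
name: bim-validation-report
description: Validates BIM element data and exports a validation report.
---
```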
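The declarative-rule remediation suggested for the critical finding can be sketched as follows. This is a minimal illustration, not code from the skill: `DeclarativeRule` and the `_OPERATORS` allow-list are hypothetical names. The point is that a rule becomes plain data interpreted by a fixed evaluator, so an LLM- or user-supplied rule cannot smuggle in executable Python.

```python
from dataclasses import dataclass
from typing import Any

# Allow-list of operators the rule language supports. Only these
# fixed lambdas ever execute; rule authors cannot add new code paths.
_OPERATORS = {
    "eq": lambda a, b: a == b,
    "gt": lambda a, b: a > b,
    "lt": lambda a, b: a < b,
    "exists": lambda a, _b: a is not None,
}

@dataclass
class DeclarativeRule:
    attribute: str   # element attribute to inspect, e.g. "fire_rating"
    op: str          # must be a key in _OPERATORS
    value: Any       # comparison target

    def check(self, element: dict) -> bool:
        # Reject any operator outside the allow-list instead of
        # executing caller-supplied callables.
        if self.op not in _OPERATORS:
            raise ValueError(f"unsupported operator: {self.op!r}")
        return _OPERATORS[self.op](element.get(self.attribute), self.value)

# The rule is data, not code:
rule = DeclarativeRule(attribute="fire_rating", op="gt", value=60)
print(rule.check({"fire_rating": 90}))  # True
print(rule.check({"fire_rating": 30}))  # False
```

Because rules are serializable data, they can also be stored, audited, and diffed, which an opaque `Callable` does not allow.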
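The path-sandboxing remediation for the high-severity finding might look like the sketch below. The function name `safe_output_path` and the sandbox root are assumptions for illustration; the technique is to resolve the candidate path and reject anything that escapes the allowed directory (requires Python 3.9+ for `Path.is_relative_to`).

```python
from pathlib import Path

ALLOWED_OUTPUT_DIR = Path("/tmp/bim_reports")  # assumed sandbox root

def safe_output_path(user_path: str) -> Path:
    """Resolve user_path and ensure it stays inside ALLOWED_OUTPUT_DIR."""
    # Joining an absolute user_path replaces the root entirely, and
    # resolve() collapses any "../" segments, so both absolute-path and
    # traversal attacks end up outside the sandbox and are rejected.
    candidate = (ALLOWED_OUTPUT_DIR / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_OUTPUT_DIR.resolve()):
        raise ValueError(f"output path escapes sandbox: {user_path!r}")
    return candidate

# A traversal attempt is rejected:
try:
    safe_output_path("../../etc/passwd")
except ValueError:
    print("rejected")
```

The resolved path can then be handed to `pd.ExcelWriter`; alternatively, as the finding notes, returning the report data to the caller avoids file writes entirely.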
Powered by SkillShield