Trust Assessment
ids-checker received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include a missing required `name` field, an arbitrary file write via `output_path`, and a potential indirect prompt injection through unsanitized output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Arbitrary File Write via `output_path` | LLM | SKILL.md:308 |
| MEDIUM | Missing required field: `name` | Static | skills/datadrivenconstruction/ids-checker/SKILL.md:1 |
| MEDIUM | Potential Indirect Prompt Injection via Unsanitized Output | LLM | SKILL.md:204 |

**HIGH: Arbitrary File Write via `output_path`** (LLM, SKILL.md:308)
The `export_to_excel` method writes data to an arbitrary file path supplied in the `output_path` parameter. An attacker who controls this parameter could write sensitive validation results (potentially containing BIM data) to an accessible location, enabling data exfiltration. If the skill runs with elevated privileges, the same flaw could be exploited to overwrite critical system files, causing denial of service or further compromise.
Remediation: Restrict file write operations to a designated, sandboxed output directory; do not allow arbitrary file paths. If user-specified paths are absolutely necessary, apply strict path validation and sanitization, and run the skill with the minimum file-system permissions it needs.

**MEDIUM: Missing required field: `name`** (Static, skills/datadrivenconstruction/ids-checker/SKILL.md:1)
The `name` field is required for claude_code skills but is missing from the frontmatter.
Remediation: Add a `name` field to the SKILL.md frontmatter.

**MEDIUM: Potential Indirect Prompt Injection via Unsanitized Output** (LLM, SKILL.md:204)
The `ValidationResult` objects, particularly their `message` and `details` fields, are constructed with f-strings that incorporate user-controlled input (e.g., property names, values, patterns). These results are returned by `get_failed_checks` and written to an Excel file. If these outputs are later fed back into an LLM without sanitization, an attacker could craft input that injects instructions into the LLM's context, potentially leading to unintended actions or information disclosure.
Remediation: Sanitize all user-controlled data before incorporating it into messages or outputs an LLM might consume. Encode or escape characters that an LLM could interpret as instructions, and apply strict input validation to all parameters that contribute to these messages.
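The missing-`name` finding is a one-line frontmatter fix. A sketch of what the SKILL.md frontmatter might look like (the `description` value here is purely illustrative):

```yaml
---
name: ids-checker  # required for claude_code skills
description: Example description of the skill  # illustrative placeholder
---
```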
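The arbitrary-file-write finding can be mitigated by confining all writes to a sandbox directory, as the remediation suggests. A minimal sketch of such a check (the `EXPORT_DIR` location and the `safe_export_path` helper are assumptions for illustration, not part of the skill):

```python
from pathlib import Path

# Hypothetical sandbox: every export must resolve to a path inside this directory.
EXPORT_DIR = Path("/var/skill-output").resolve()

def safe_export_path(output_path: str) -> Path:
    """Resolve a user-supplied path and refuse anything outside EXPORT_DIR."""
    # Joining with an absolute output_path replaces EXPORT_DIR entirely,
    # and ".." segments can climb out, so resolve() first and then verify.
    candidate = (EXPORT_DIR / output_path).resolve()
    if not candidate.is_relative_to(EXPORT_DIR):  # Python 3.9+
        raise ValueError(f"refusing to write outside sandbox: {output_path!r}")
    return candidate
```

An `export_to_excel`-style method would then call `safe_export_path(output_path)` and write only to the returned path, so traversal payloads such as `../../etc/passwd` or absolute paths are rejected before any file is opened.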
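For the prompt-injection finding, user-controlled values can be neutralized before being interpolated into `ValidationResult` messages. A minimal sketch, assuming a hypothetical `sanitize_for_llm` helper (the skill itself defines no such function):

```python
import html
import re

# Strip non-printable control characters (tab and newline handled separately).
_CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b-\x1f\x7f]")

def sanitize_for_llm(value: str, max_len: int = 200) -> str:
    """Neutralize text before embedding it in output an LLM may later read."""
    text = _CONTROL_CHARS.sub("", str(value))
    # Collapse line breaks so injected text cannot masquerade as a new
    # "system" or "assistant" line in a transcript-style prompt.
    text = text.replace("\r", " ").replace("\n", " ")
    # Escape markup-like payloads (<tool>, <system>, ...) rather than pass
    # them through verbatim.
    text = html.escape(text)
    if len(text) > max_len:
        text = text[:max_len] + "..."
    return text
```

A message would then be built as, e.g., `f"Property {sanitize_for_llm(prop_name)} failed check"`, so the same value is defanged everywhere it appears, including in the Excel export that may be re-read by an LLM.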
[View the full report on SkillShield](https://skillshield.io/report/93d1548bd2832cb9)