Trust Assessment
gedcom-explorer received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 3 medium, and 0 low severity. Key findings include Unsafe deserialization / dynamic eval, Client-Side HTML/JS Injection via Title/Subtitle, and Arbitrary File Read/Write via Command Line Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Client-Side HTML/JS Injection via Title/Subtitle.** The skill's `build_explorer.py` script embeds user-provided `--title` and `--subtitle` arguments directly into the generated HTML file without escaping. An attacker, or a malicious LLM prompt, could inject arbitrary HTML or JavaScript into these fields, producing a Cross-Site Scripting (XSS) vulnerability in the output: when a user opens the generated HTML file, the injected script could execute, potentially leading to data exfiltration from the user's browser, session hijacking, or defacement. *Remediation:* HTML-escape the `TITLE` and `SUBTITLE` variables before inserting them into the HTML template, e.g. with `html.escape()` from Python's `html` module or a similarly robust mechanism that converts the special characters `&`, `<`, `>`, `"`, and `'` into their corresponding HTML entities. | LLM | scripts/build_explorer.py:400 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. *Remediation:* Remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/justinhartbiz/gedcom-explorer/scripts/build_explorer.py:1669 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution (second occurrence of the pattern above). *Remediation:* Remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/justinhartbiz/gedcom-explorer/scripts/build_explorer.py:2017 |
| MEDIUM | **Arbitrary File Read/Write via Command Line Arguments.** The `build_explorer.py` script accepts arbitrary file paths for both the input GEDCOM file (`<input.ged>`) and the output HTML file (`[output.html]`) directly from `sys.argv`. While this is core functionality, an LLM invoking the skill with untrusted or malicious paths could coerce it into reading sensitive files (though it would attempt to parse them as GEDCOM, likely failing for non-GEDCOM content) or overwriting arbitrary files on the filesystem. This grants broader filesystem access than strictly necessary if the calling environment does not constrain it. *Remediation:* Validate file paths, ideally restricting them to a designated, sandboxed directory for input and output; the LLM orchestrator should likewise enforce strict path validation and sandboxing when invoking skills that perform filesystem operations. | LLM | scripts/build_explorer.py:419 |
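The escaping fix recommended for the HIGH finding can be sketched as follows. This is a minimal illustration, not the skill's actual code: `render_header` is a hypothetical helper standing in for wherever `build_explorer.py` interpolates the title and subtitle into its HTML template.

```python
import html


def render_header(title: str, subtitle: str) -> str:
    """Escape user-supplied strings before embedding them in HTML.

    html.escape() converts &, <, > (and, with quote=True, " and ')
    into HTML entities, neutralizing injected markup.
    """
    safe_title = html.escape(title, quote=True)
    safe_subtitle = html.escape(subtitle, quote=True)
    return f'<h1>{safe_title}</h1>\n<p class="subtitle">{safe_subtitle}</p>'


# A hostile --title value is rendered inert:
print(render_header('<script>alert(1)</script>', 'My "Family" Tree'))
```

With escaping in place, the payload survives as visible text in the page rather than executing as script.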
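The path-sandboxing remediation for the arbitrary read/write finding could look like the sketch below. The sandbox location and the `safe_path` helper are assumptions for illustration (requires Python 3.9+ for `Path.is_relative_to`); the skill itself does not currently implement this.

```python
from pathlib import Path

# Hypothetical allowed working directory; the real deployment would choose this.
SANDBOX = Path("/tmp/gedcom-work").resolve()


def safe_path(user_path: str, base: Path = SANDBOX) -> Path:
    """Resolve a user-supplied path and reject anything outside the sandbox.

    resolve() collapses '..' components and symlink-free traversal tricks,
    so 'sub/../../etc/passwd' is checked against its real destination.
    """
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```

Note that `base / user_path` discards `base` entirely when `user_path` is absolute, which is exactly why the `is_relative_to` check afterward is what enforces the boundary, not the join.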
Embed Code
[Full SkillShield report](https://skillshield.io/report/eae93be7a5d973d7)
Powered by SkillShield