Trust Assessment
boggle received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 3 medium, and 0 low severity. Key findings include a suspicious `urllib.request` import, an arbitrary file read via a custom dictionary path, and a potential denial of service from a malicious dictionary download.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary file read via custom dictionary path.** The `solve.py` script lets users specify arbitrary file paths for custom dictionaries via the `--dict` command-line argument, and the `load_dictionaries` function opens and reads those files. An attacker could supply a path to a sensitive file (e.g., `/etc/passwd`, `/app/secrets.txt`, `/proc/self/environ`), and the skill would read it, treating its lines as potential Boggle words. While the skill does not explicitly exfiltrate file contents, the LLM processing these "words" could expose sensitive information in its output or logs. Remediation: 1. Restrict `--dict` paths to a specific, non-sensitive directory (e.g., a `data/custom_dicts` subdirectory of the skill) and block directory traversal (`..`). 2. Sanitize input so paths contain neither traversal sequences nor absolute paths outside the allowed directory. 3. Remove the `--dict` argument entirely if custom dictionaries are not essential. 4. Run the skill in a sandboxed environment with minimal file system access. | LLM | scripts/solve.py:105 |
| MEDIUM | **Suspicious import: `urllib.request`.** Import of `urllib.request` detected. This module provides network access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/christianhaberl/boggle/data/download.py:3 |
| MEDIUM | **Suspicious import: `urllib.request`.** Import of `urllib.request` detected. This module provides network access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/christianhaberl/boggle/scripts/solve.py:36 |
| MEDIUM | **Potential denial of service from malicious dictionary download.** The skill downloads dictionary files from a hardcoded GitHub repository (`https://raw.githubusercontent.com/christianhaberl/boggle-openclaw-skill/main/data`). Although the URL is fixed, a compromised upstream repository could serve malicious files. The skill only reads these files as word lists and does not execute them, but a crafted dictionary (an extremely large word count, very long words, or words designed to trigger inefficient trie construction) could cause excessive memory or CPU consumption, denying service to the skill or host. Remediation: 1. Verify integrity of downloaded files with checksums (e.g., SHA-256) hardcoded or fetched from a trusted, separate source. 2. Apply resource limits (memory, CPU time) to the skill's execution environment. 3. Regularly review the upstream repository for signs of compromise. | LLM | scripts/solve.py:50 |
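The path-restriction remediation for the HIGH finding can be sketched as follows. This is an illustrative example, not code from `solve.py`: the names `ALLOWED_DIR` and `resolve_dict_path` are assumptions, and the real skill would wire this into its `--dict` argument handling.

```python
# Hedged sketch: validate a user-supplied dictionary path against an
# allowed base directory, rejecting traversal and absolute paths.
from pathlib import Path

# Assumption: custom dictionaries live under data/custom_dicts in the skill dir.
ALLOWED_DIR = (Path(__file__).resolve().parent / "data" / "custom_dicts").resolve()

def resolve_dict_path(user_path: str) -> Path:
    """Resolve a --dict argument, refusing paths outside ALLOWED_DIR."""
    candidate = (ALLOWED_DIR / user_path).resolve()
    # Path.is_relative_to (Python 3.9+) blocks "../../etc/passwd"-style escapes
    # as well as absolute paths, since resolve() normalizes both.
    if not candidate.is_relative_to(ALLOWED_DIR):
        raise ValueError(f"dictionary path escapes allowed directory: {user_path}")
    return candidate
```

Resolving before checking is the important design choice: comparing the raw string for `..` substrings misses tricks like symlinks or redundant separators, whereas comparing fully resolved paths does not.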
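The integrity-verification and resource-limit remediations for the DoS finding could look like the sketch below. `MAX_BYTES`, `verify_sha256`, and `fetch_dictionary` are hypothetical names, and the pinned digest would in practice be hardcoded next to the download URL; none of this is taken from the skill's actual `download.py`.

```python
# Hedged sketch: size-capped download plus SHA-256 pinning for a
# dictionary file, mitigating both tampering and memory-exhaustion DoS.
import hashlib
import urllib.request

MAX_BYTES = 5 * 1024 * 1024  # assumed cap; tune to the largest legitimate dictionary

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Return True iff data hashes to the pinned SHA-256 digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

def fetch_dictionary(url: str, expected_sha256: str) -> bytes:
    """Download a dictionary, enforcing a size limit and a pinned checksum."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read(MAX_BYTES + 1)  # read one extra byte to detect oversize
    if len(data) > MAX_BYTES:
        raise ValueError("dictionary exceeds size limit; refusing to load")
    if not verify_sha256(data, expected_sha256):
        raise ValueError("checksum mismatch; refusing to use dictionary")
    return data
```

Pinning the checksum in the skill itself (rather than fetching it from the same repository) is what makes this effective: a compromise of the upstream repo then cannot silently swap both the file and its digest.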
[Full report](https://skillshield.io/report/4b9201c95a87b765)