Security Audit
dkyazzentwatwa/chatgpt-skills:hash-calculator
github.com/dkyazzentwatwa/chatgpt-skills

Trust Assessment
dkyazzentwatwa/chatgpt-skills:hash-calculator received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 4 critical, 0 high, 0 medium, and 0 low severity. Key findings include Path Traversal in File Operations, Path Traversal in Directory Operations, Path Traversal in Checksum File Generation, and Path Traversal in Checksum File Verification.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 24, 2026 (commit d4bad335). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Path Traversal in File Operations** — The skill allows arbitrary file-system access through user-controlled file paths, leading to potential data exfiltration, unauthorized file modification, or denial of service. Functions such as `hash_file`, `hash_directory`, `generate_checksums`, and `verify_checksums` use user-provided paths directly, without sanitization or validation restricting access to an allowed directory. An attacker could supply a path like `../../../../etc/passwd` to read sensitive system files, or `../../../../tmp/malicious_file.txt` to write to an arbitrary location. *Remediation:* Before using any user-provided `filepath`, `directory`, `output`, `checksum_file`, or `base_dir`, resolve it with `pathlib.Path.resolve(strict=True)` and confirm the resolved absolute path is strictly confined to an allowed base directory (e.g., a temporary or user-specific data directory). For `generate_checksums`, write the output file only to an authorized location; for `verify_checksums`, validate both the `checksum_file` path and each `filename` extracted from it. | LLM | scripts/hash_calc.py:64 |
| CRITICAL | **Path Traversal in Directory Operations** — The `hash_directory` function accepts a user-controlled `directory` argument that is used directly to construct `pathlib.Path` objects. An attacker can specify arbitrary directories (e.g., `../../`) to list and hash files outside the intended scope, potentially exfiltrating sensitive files or disclosing file-system structure. *Remediation:* Resolve the `directory` argument with `pathlib.Path.resolve(strict=True)` and confirm the resolved path is confined to an allowed base directory. | LLM | scripts/hash_calc.py:130 |
| CRITICAL | **Path Traversal in Checksum File Generation** — The `generate_checksums` function takes a user-controlled `output` filepath. An attacker can specify an arbitrary path (e.g., `../../../../tmp/malicious_checksums.txt`) at which to write checksum data, potentially overwriting critical system files or writing to unauthorized locations, leading to data corruption or denial of service. *Remediation:* Resolve the `output` path with `pathlib.Path.resolve(strict=True)` and confirm it is confined to an allowed base directory (e.g., a temporary or user-specific output directory). | LLM | scripts/hash_calc.py:166 |
| CRITICAL | **Path Traversal in Checksum File Verification** — The `verify_checksums` function is vulnerable to path traversal in two ways: (1) the `checksum_file` argument is user-controlled, allowing an attacker to read arbitrary checksum files; (2) more critically, each `filename` extracted from the checksum file is used to construct `full_filepath = base_path / filename`. A crafted checksum file containing entries like `../../../../etc/passwd` causes the skill to read and hash arbitrary files, enabling severe data exfiltration. *Remediation:* Validate both the `checksum_file` argument and every extracted `filename`; after constructing `full_filepath`, verify that `full_filepath.resolve()` is a child of `base_path.resolve()` to prevent directory traversal. | LLM | scripts/hash_calc.py:185 |
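All four findings share the same remediation pattern: resolve every user-supplied path and reject it if it escapes an allowed base directory. A minimal sketch of such a guard is shown below; the helper name `resolve_within` is illustrative (not from the skill's source), and it assumes Python 3.9+ for `Path.is_relative_to`:

```python
from pathlib import Path

def resolve_within(base_dir: str, user_path: str) -> Path:
    """Resolve user_path and reject it if it escapes base_dir.

    Illustrative helper for the remediations above; not code from
    the audited skill. Requires Python 3.9+ (Path.is_relative_to).
    """
    # strict=True raises FileNotFoundError if the base does not exist
    base = Path(base_dir).resolve(strict=True)
    # Joining with an absolute user_path discards base entirely
    # (pathlib semantics), so the containment check below still
    # catches absolute-path attacks as well as ../ traversal.
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes allowed directory: {user_path!r}")
    return candidate
```

In `verify_checksums`, the same check would be applied twice: once to the `checksum_file` argument, and once to each `filename` entry parsed from the file before hashing it.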
Powered by SkillShield