Trust Assessment
doppel received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include unsanitized user input in shell command execution and an arbitrary file write via path traversal in the output directory.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unsanitized user input in shell command execution.** The skill explicitly instructs the execution of a Python script using a shell command: `python3 scripts/parse_chat.py <chat_export.txt> <target_name> <output_dir>`. The arguments `<chat_export.txt>`, `<target_name>`, and `<output_dir>` are user-provided and directly interpolated into the shell command. An attacker could inject arbitrary shell commands by providing malicious input for these arguments (e.g., `target_name="; rm -rf /"` or `output_dir="/tmp/foo; evil_command.sh"`), leading to remote code execution. The LLM must sanitize all user-provided arguments before constructing and executing the shell command. This typically involves escaping shell metacharacters or using a safer execution method that avoids shell interpretation (e.g., `subprocess.run` with `shell=False` and passing arguments as a list). | LLM | SKILL.md:55 |
| HIGH | **Arbitrary file write via path traversal in output directory.** The `scripts/parse_chat.py` script uses the user-provided `output_dir` argument directly in `os.makedirs(output_dir, exist_ok=True)` and `os.path.join(output_dir, 'parsed_messages.json')`. An attacker can supply a path traversal sequence (e.g., `../../../../tmp`) in `output_dir` to write `parsed_messages.json` and `target_messages.txt` to arbitrary locations on the file system. This could lead to overwriting critical system files, writing to sensitive directories, or achieving persistence. Sanitize the `output_dir` argument to prevent path traversal. This can be done by resolving the path to its absolute form and ensuring it remains within an allowed base directory using `os.path.abspath` and `os.path.commonpath` (the string-based `os.path.commonprefix` is not component-aware and should be avoided for this check), or by strictly validating the input to ensure it does not contain path separators or `..` sequences. | LLM | scripts/parse_chat.py:130 |
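The remediation suggested for the CRITICAL finding can be sketched as follows. This is a minimal illustration, not the skill's own code; the wrapper names `build_parser_argv` and `run_parser` are hypothetical:

```python
import subprocess

def build_parser_argv(chat_export: str, target_name: str, output_dir: str) -> list:
    # Each value becomes its own argv element; nothing is parsed by a shell,
    # so metacharacters like ';' or '&&' in target_name stay literal.
    return ["python3", "scripts/parse_chat.py", chat_export, target_name, output_dir]

def run_parser(chat_export: str, target_name: str, output_dir: str) -> None:
    # shell=False is subprocess.run's default; passing a list of arguments
    # avoids shell interpretation entirely, unlike an interpolated command string.
    subprocess.run(build_parser_argv(chat_export, target_name, output_dir), check=True)
```

With this approach, a malicious value such as `"; rm -rf /"` reaches `parse_chat.py` as an ordinary string argument instead of being executed by a shell.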
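The HIGH finding's path-traversal check could look like this sketch. The function name `resolve_output_dir` and the `allowed_base` default are assumptions; `os.path.commonpath` is used because it compares whole path components, so a sibling directory like `output-evil` is not mistaken for a child of `output`:

```python
import os

def resolve_output_dir(user_dir: str, allowed_base: str = "output") -> str:
    """Resolve a user-supplied output directory, rejecting any path
    that escapes the allowed base directory."""
    base = os.path.abspath(allowed_base)
    candidate = os.path.abspath(os.path.join(base, user_dir))
    # commonpath is component-aware, so '..' sequences that climb out of
    # base (and absolute paths pointing elsewhere) are both rejected.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError("output_dir escapes %s: %r" % (allowed_base, user_dir))
    return candidate
```

The validated path can then be passed to `os.makedirs` and `os.path.join` in place of the raw `output_dir` argument.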
[Full report](https://skillshield.io/report/27a22b8bf0c7780a)