Trust Assessment
novel-writer received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 9 findings: 5 critical, 2 high, 2 medium, and 0 low severity. Key findings include Arbitrary command execution, Python file could not be statically analyzed, Skill requests broad 'shell:execute' and 'filesystem:write' permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings9
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/vanki-wang/novel-writer/run.py:21 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/vanki-wang/novel-writer/run.py:43 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/vanki-wang/novel-writer/run.py:44 | |
| CRITICAL | User-controlled 'ollama_model' in config can lead to command injection The `ollama_model` variable, loaded from the user-editable `~/.openclaw/novel_config.yaml`, is directly used as an argument in a `subprocess.run` call: `subprocess.run([“ollama”, “run”, model, full_prompt])`. A malicious user could modify `novel_config.yaml` to set `ollama_model` to a string containing shell commands (e.g., `qwen3:latest; rm -rf /`). Although `subprocess.run` with a list of arguments is generally safer against direct shell injection (as `shell=False` is default), the `shell:execute` permission means the skill *can* execute arbitrary commands. If `ollama` itself is a shell script or interprets its arguments in a way that allows command execution, or if the `model` string contains shell metacharacters and `shell=True` was used (even if not explicitly, the permission allows it), this becomes a direct command injection. Validate the `ollama_model` value from the config file to ensure it only contains valid model names and no shell metacharacters. Consider using a whitelist of allowed model names. If `ollama` is a custom script, ensure it does not use `shell=True` or interpret arguments as commands. | LLM | run.py:24 | |
| CRITICAL | User-controlled 'obsidian_vault' in config allows arbitrary file writes and potential data exfiltration The `obsidian_vault` variable, loaded from the user-editable `~/.openclaw/novel_config.yaml`, determines the base directory for writing new Markdown files. A malicious user could set `obsidian_vault` to a sensitive system directory (e.g., `/etc/`, `/root/`, `/usr/local/bin/`) and overwrite existing files or write new malicious scripts. This is directly enabled by the `filesystem:write` permission. Additionally, if the `vault` is set to a directory containing sensitive data, the subsequent `git add .` and `git commit` operations could stage and commit those sensitive files, leading to data exfiltration if the repository is public or accessible. Restrict the `obsidian_vault` path to a safe, non-sensitive directory, ideally within the user's home directory or a dedicated skill data directory. Implement robust path validation to prevent directory traversal attacks (e.g., `../`). | LLM | run.py:33 | |
| HIGH | Skill requests broad 'shell:execute' and 'filesystem:write' permissions The skill's manifest explicitly requests `shell:execute` and `filesystem:write` permissions. While these are necessary for its stated functionality (running `ollama`, `git`, and writing files), they significantly increase the attack surface. If combined with user-controlled inputs or configuration, these permissions can be leveraged for command injection and arbitrary file writes, posing a substantial security risk. Review if both `shell:execute` and `filesystem:write` are strictly necessary. If `shell:execute` is needed, ensure all `subprocess` calls use fixed commands and properly sanitized arguments, or consider using a more sandboxed execution environment. If `filesystem:write` is needed, restrict write operations to a dedicated, non-sensitive directory. | LLM | SKILL.md:10 | |
| HIGH | User-controlled 'obsidian_vault' in Git commands can lead to command injection via Git hooks The `vault` variable, loaded from `novel_config.yaml`, is used as the working directory for `git` commands (`git -C {vault} add .` and `git -C {vault} commit ...`). If a malicious user controls `novel_config.yaml`, they could set `vault` to a path that contains a malicious `.git/hooks` directory. When `git add` or `git commit` is executed in such a directory, the malicious Git hooks (e.g., `pre-commit`, `post-commit`) would be triggered, leading to arbitrary code execution. This is enabled by the `shell:execute` permission. Restrict the `obsidian_vault` path to a safe, non-sensitive directory. If possible, avoid running `git` commands in user-controlled directories or ensure that Git hooks are disabled or sanitized for such operations. | LLM | run.py:44 | |
| MEDIUM | Python file could not be statically analyzed SyntaxError: invalid character '“' (U+201C) (line 4) | Static | skills/vanki-wang/novel-writer/run.py:4 | |
| MEDIUM | User input 'prompt' is directly embedded into the downstream LLM prompt The user-provided `prompt` (from `sys.argv[2]`) is directly interpolated into the `full_prompt` string that is sent to the Ollama model. This allows a malicious user to inject instructions or adversarial prompts into the downstream LLM, potentially causing it to generate undesirable, harmful, or off-topic content, or to ignore its initial instructions. While this does not directly affect the host LLM or the skill's execution environment, it compromises the integrity and safety of the LLM's output. Implement prompt sanitization or use a templating system that separates user input from system instructions. Consider using a robust LLM guardrail or input validation to detect and mitigate prompt injection attempts before sending them to the model. | LLM | run.py:22 |
Scan History
Embed Code
[](https://skillshield.io/report/c5bff015c7e361ca)
Powered by SkillShield