Trust Assessment
The `research` skill received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include command injection in the `gemini` CLI call and a path traversal vulnerability in the output file path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection in `gemini` CLI call.** The skill instructs the sub-agent to execute `gemini --yolo "[RESEARCH PROMPT]"`. The `[RESEARCH PROMPT]` is derived directly from user-controlled input (`[FULL TOPIC WITH CONTEXT]`). This allows an attacker to inject arbitrary shell commands by including metacharacters (e.g., `"; rm -rf /"`) within their research topic, leading to remote code execution on the host system. The `--yolo` flag, which "auto-approves file operations," further increases the severity by bypassing potential interactive prompts or safety mechanisms. **Remediation:** Implement robust input sanitization for `[RESEARCH PROMPT]` before embedding it into the shell command. Escape all shell metacharacters or, preferably, pass the research prompt as a separate argument to `gemini` if the CLI supports it. Use a safe subprocess execution method that avoids shell interpretation (e.g., `subprocess.run(['gemini', '--yolo', research_prompt], shell=False)` in Python) instead of direct string concatenation for shell commands. | LLM | SKILL.md:39 |
| HIGH | **Path Traversal Vulnerability in Output File Path.** The skill instructs the sub-agent to save research output to `~/clawd/research/[slug]/research.md`. The `[slug]` variable is derived from user-controlled input (the research topic). If `[slug]` is not properly sanitized, an attacker could inject path traversal sequences (e.g., `../`, `/`) to write files to arbitrary locations on the filesystem, outside the intended `~/clawd/research/` directory. This could lead to overwriting critical system files, writing to web server roots for remote code execution, or storing malicious content in unexpected locations. **Remediation:** Ensure that the `[slug]` variable is strictly sanitized to contain only alphanumeric characters, hyphens, and underscores, and that no path separators are allowed. Implement a robust slug generation function that prevents path traversal. Alternatively, use a filesystem API that explicitly resolves paths safely within a confined directory. | LLM | SKILL.md:47 |
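Both findings share one remediation pattern: treat user input as data, never as shell text or path text. A minimal Python sketch of the suggested mitigations, assuming the skill's wrapper is written in Python (the `slugify`, `output_path`, and `build_gemini_argv` helper names are illustrative, not part of the skill):

```python
import re
from pathlib import Path


def slugify(topic: str) -> str:
    """Reduce a research topic to a filesystem-safe slug:
    lowercase alphanumerics, hyphens, and underscores only."""
    slug = re.sub(r"[^a-z0-9_-]+", "-", topic.lower()).strip("-")
    if not slug:
        raise ValueError("topic produced an empty slug")
    return slug


def output_path(topic: str,
                base: Path = Path.home() / "clawd" / "research") -> Path:
    """Build the output path and verify it stays inside the base directory,
    even if slug generation is ever weakened."""
    path = (base / slugify(topic) / "research.md").resolve()
    if not path.is_relative_to(base.resolve()):
        raise ValueError("path escaped the research directory")
    return path


def build_gemini_argv(research_prompt: str) -> list[str]:
    """Build an argv list for subprocess.run(..., shell=False).
    The prompt travels as one discrete argument, so shell
    metacharacters inside it are never interpreted."""
    return ["gemini", "--yolo", research_prompt]
```

Invoking `subprocess.run(build_gemini_argv(prompt), shell=False, check=True)` then delivers a hostile prompt such as `"; rm -rf /"` to `gemini` as literal text, and `slugify` collapses traversal sequences like `../../etc/passwd` to a single safe path segment.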
[Full report on SkillShield](https://skillshield.io/report/b17e014eb0fff346)
Powered by SkillShield