Trust Assessment
deep-research-pro received a trust score of 62/100, placing it in the Caution category: the skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 1 medium, and 1 low severity. Key findings include Prompt Injection via Sub-Agent Task (critical), Command Injection in DDG Search Script Execution (high), and a missing Node lockfile (low).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, making it the weakest area of the skill.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via Sub-Agent Task.** The skill constructs a sub-agent's 'task' string by directly embedding user-controlled input such as '[TOPIC]', '[user's goal]', '[any specifics]', and '[slug]'. A malicious user could inject instructions into these fields, manipulating the sub-agent's behavior or causing it to deviate from its intended purpose. This is a direct prompt-injection vector. Remediation: sanitize or strictly validate all user-provided input before embedding it into the sub-agent's 'task' string; consider a structured input format for sub-agent tasks instead of free-form string interpolation, or ensure the sub-agent's prompt is robust against adversarial instructions. | LLM | SKILL.md:100 |
| HIGH | **Command Injection in DDG Search Script Execution.** The skill explicitly instructs execution of the shell command `/home/clawdbot/clawd/skills/ddg-search/scripts/ddg` with user-controlled input ('<sub-question keywords>' and '<topic>') embedded directly into the command string. If these inputs contain shell metacharacters (e.g., `;`, backticks, `$()`, `\|\|`), an attacker could execute arbitrary commands on the host system. Remediation: rigorously sanitize or escape all user-provided input before shell execution; better, pass arguments as distinct parameters to the script rather than embedding them in a single string, or use a safer API that avoids direct shell execution entirely. | LLM | SKILL.md:30 |
| HIGH | **Command Injection in Directory Creation.** The skill instructs the creation of a directory using `mkdir -p ~/clawd/research/[slug]`, where `[slug]` is derived from user-controlled input (the research topic). If `[slug]` is not sanitized, a malicious user could inject shell metacharacters, leading to arbitrary command execution (e.g., `mkdir -p ~/clawd/research/; rm -rf /; #`). Remediation: sanitize `[slug]` so it contains only characters safe for a directory name (e.g., alphanumerics, hyphens, underscores) and no shell metacharacters, and validate it strictly before using it in a shell command. | LLM | SKILL.md:78 |
| MEDIUM | **Potential Data Exfiltration/SSRF via URL Fetching.** The skill uses `curl -sL "<url>"` to fetch content from URLs identified during research. While these URLs are typically external web pages, a manipulated LLM or a malicious search result could supply a crafted URL, turning this `curl` command into a Server-Side Request Forgery (SSRF) vector against internal network resources or local files (e.g., `file:///etc/passwd`), or into an exfiltration channel via sensitive data in query parameters. Remediation: validate URLs against a whitelist of allowed schemes (http, https) and, ideally, domains; sandbox the environment where `curl` runs so it cannot reach internal networks or the local filesystem; consider a dedicated, secure web-fetching utility that prevents SSRF and local file access. | LLM | SKILL.md:44 |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile (package-lock.json, pnpm-lock.yaml, or yarn.lock) was found. Remediation: commit a lockfile for deterministic dependency resolution. | Dependencies | skills/parags/deep-research-pro/package.json |
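For the prompt-injection finding, one remediation shape is to validate the user-supplied topic against a strict allowlist before it is ever interpolated into a sub-agent task string. The sketch below is illustrative, not part of the skill: `validate_topic`, `build_task`, and the character set are assumptions chosen to show the pattern.

```python
import re

# Allow only a conservative character set; newlines, backticks, pipes, and
# other instruction-smuggling characters are rejected outright.
TOPIC_RE = re.compile(r"^[A-Za-z0-9 ,.'\-]{1,120}$")

def validate_topic(topic: str) -> str:
    """Return the topic unchanged if it is safe; raise otherwise."""
    if not TOPIC_RE.fullmatch(topic):
        raise ValueError(f"unsafe topic rejected: {topic!r}")
    return topic

def build_task(topic: str) -> str:
    # Delimit the untrusted value so the sub-agent can treat it as data,
    # not as instructions.
    return f"Research the following topic: <topic>{validate_topic(topic)}</topic>"
```

Structured delimiters alone do not defeat a determined injection, which is why validation happens first; the delimiters only make the sub-agent's job of treating the value as data easier.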
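The two command-injection findings share one fix: never build a shell command string from user input. Passing arguments as distinct `argv` elements means shell metacharacters are delivered to the script as literal text. A minimal sketch (the helper name and its parameterized script path are assumptions; the finding's actual script lives at `/home/clawdbot/clawd/skills/ddg-search/scripts/ddg`):

```python
import subprocess

def run_script(script: str, *args: str) -> str:
    """Run a script with each argument as a separate argv element.

    Because no shell is involved (shell=False is the default), characters
    like ';', '`', '$()', and '||' in args are passed through literally
    instead of being interpreted as command syntax.
    """
    result = subprocess.run(
        [script, *args],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

Usage would look like `run_script("/home/clawdbot/clawd/skills/ddg-search/scripts/ddg", keywords)`, with `keywords` untouched by any shell.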
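The SSRF finding's scheme-whitelist recommendation can be sketched as a pre-fetch check that also rejects hostnames resolving to private, loopback, or link-local addresses. This is an illustrative sketch, not a complete defense: it does not guard against DNS rebinding between the check and the actual fetch, which is why the finding also recommends network-level sandboxing.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def check_url(url: str) -> str:
    """Validate a URL before handing it to curl or an HTTP client.

    Rejects non-http(s) schemes (e.g. file://) and any hostname that
    resolves to a private, loopback, or link-local address.
    """
    parts = urlparse(url)
    if parts.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"disallowed scheme: {parts.scheme!r}")
    if not parts.hostname:
        raise ValueError("URL has no hostname")
    for info in socket.getaddrinfo(parts.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError(f"blocked address: {addr}")
    return url
```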
Full report: https://skillshield.io/report/79252343b29e2327
Powered by SkillShield