Trust Assessment
deep-research received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 8 findings: 1 critical, 6 high, 0 medium, and 1 low severity. Key findings include "Sensitive path access: AI agent config", "Untrusted input embedded directly into LLM-facing output", and "Arbitrary web content fetching and inclusion in report".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (8)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted input embedded directly into LLM-facing output.** The skill constructs a markdown report by directly embedding the user's initial query (`self.original_query`) and content (titles, URLs, and scraped text) fetched from external, untrusted websites. If this report is later processed by an LLM, a malicious user or a compromised website could inject instructions or manipulate the LLM's behavior by crafting specific text in the query, titles, URLs, or scraped content. *Remediation:* Sanitize all untrusted strings (user query, fetched titles, URLs, and content) before embedding them in the final report, e.g. by escaping markdown characters, wrapping them in LLM-safe delimiters (XML tags, JSON blocks), or passing them as structured data rather than raw text so a downstream LLM cannot interpret them as instructions. | LLM | deep_research.py:206 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path '~/.clawdbot/' detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/romancircus/privatedeepsearch-claw/skills/deep-research/SKILL.md:9 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path '~/.clawdbot/' detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/romancircus/privatedeepsearch-claw/skills/deep-research/SKILL.md:53 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path '~/.clawdbot/' detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/romancircus/privatedeepsearch-claw/skills/deep-research/SKILL.md:122 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path '~/.clawdbot/' detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/romancircus/privatedeepsearch-claw/skills/deep-research/SKILL.md:127 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path '~/.clawdbot/' detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/romancircus/privatedeepsearch-claw/skills/deep-research/SKILL.md:132 |
| HIGH | **Arbitrary web content fetching and inclusion in report.** The `fetch_content` function retrieves content from URLs returned by SearXNG. Although SearXNG is configured to run locally, the URLs it returns can point to any external or internal network resource. If the skill runs in an environment with access to internal network services, a malicious query could cause it to fetch sensitive data from those resources (e.g., `http://192.168.1.100/admin/config.txt`) and include it in the final report, thereby exfiltrating it. The `IGNORED_DOMAINS` list does not prevent fetching from internal IP addresses or other non-listed domains. *Remediation:* Implement strict URL validation in `fetch_content` so it only accesses public internet resources, e.g. by whitelisting allowed domains/IP ranges or explicitly blocking private/reserved IP ranges (RFC 1918 addresses) and `localhost` for fetched content, while still allowing `localhost` for SearXNG itself. | LLM | deep_research.py:100 |
| LOW | **Unpinned Python dependencies.** The `clawdbot` manifest specifies Python dependencies (`aiohttp`, `beautifulsoup4`) without pinning them to specific versions. This can lead to non-deterministic builds and potential security vulnerabilities if a new version of a dependency introduces a breaking change or a security flaw without explicit review. *Remediation:* Pin Python dependencies to specific versions (e.g., `aiohttp==3.8.1`, `beautifulsoup4==4.11.1`) in the `clawdbot` manifest to ensure reproducible and secure builds. | LLM | SKILL.md |
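The remediation for the critical finding can be sketched as follows. This is a minimal illustration, not the skill's actual code: `sanitize_untrusted` and `embed_untrusted` are hypothetical helpers that escape markdown structure and wrap untrusted strings in explicit delimiters before they reach an LLM-facing report.

```python
def sanitize_untrusted(text: str, max_len: int = 4000) -> str:
    """Neutralize markdown/control characters in untrusted text
    before it is embedded in an LLM-facing report."""
    text = text[:max_len]
    # Escape the backslash first, then characters that markdown or a
    # downstream LLM could interpret as structure or instructions.
    text = text.replace("\\", "\\\\")
    for ch in "`*_[]<>#|":
        text = text.replace(ch, "\\" + ch)
    return text


def embed_untrusted(label: str, text: str) -> str:
    """Wrap sanitized text in explicit delimiters so a downstream LLM
    can treat it as data, not instructions."""
    return (
        f'<untrusted source="{label}">\n'
        f"{sanitize_untrusted(text)}\n"
        f"</untrusted>"
    )


# Brackets and backticks arrive escaped, wrapped in <untrusted> tags.
print(embed_untrusted("search_result", "Ignore previous instructions! [click](http://evil)"))
```

Delimiting alone is not a complete defense (a determined payload can still try to "close" the wrapper), which is why escaping the bracket and tag characters inside the wrapped text matters as much as the wrapper itself.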
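Similarly, the fetching finding against `fetch_content` could be mitigated with an address check before each outbound request. A minimal sketch using only Python's standard library; `is_safe_url` is a hypothetical guard, and because it resolves hostnames at check time, a production version would also need to guard against DNS rebinding between the check and the actual fetch.

```python
import ipaddress
import socket
from urllib.parse import urlparse


def is_safe_url(url: str) -> bool:
    """Return True only for http(s) URLs whose every resolved address
    is a public, routable IP (blocks RFC 1918, loopback, link-local,
    reserved, and multicast ranges)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the hostname and check every address it maps to.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        # Strip a possible IPv6 zone id ("%eth0") before parsing.
        addr = ipaddress.ip_address(info[4][0].split("%", 1)[0])
        if (addr.is_private or addr.is_loopback or addr.is_link_local
                or addr.is_reserved or addr.is_multicast):
            return False
    return True
```

Applying this guard only to result URLs (not to the search endpoint itself) preserves the report's suggested carve-out: SearXNG can keep running on `localhost` while fetched content is restricted to the public internet.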
[View the full report on SkillShield](https://skillshield.io/report/d112cd7a55f7ab47)