Trust Assessment
deepresearch-conversation received a trust score of 61/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include a suspicious `requests` import, an unsanitized `$task_id` in a `curl` command for `FileParseQuery`, and potential data exfiltration via an unsanitized `$local_file_path` in `FileUpload`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsanitized `$task_id` in `curl` command for `FileParseQuery`.** The `FileParseQuery` API example in `SKILL.md` interpolates `$task_id` directly into a `curl` command's URL. If `$task_id` originates from untrusted user input and contains shell metacharacters (e.g., `;`, `\|`, `$(...)`), it could lead to arbitrary command execution on the host where the `curl` command runs. Sanitize or quote `$task_id` when constructing the shell command: for example, use `printf %q` in bash, or pass parameters directly to `curl` from a programming-language wrapper rather than relying on shell interpolation. | LLM | SKILL.md:104 |
| HIGH | **Potential data exfiltration via unsanitized `$local_file_path` in `FileUpload`.** The `FileUpload` API example in `SKILL.md` uses `file=@local_file_path`. If `local_file_path` can be influenced by untrusted user input, a malicious user could specify paths to arbitrary files on the agent's filesystem (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, `~/.openclaw/openclaw.json`), which would then be uploaded to the Baidu API, exfiltrating sensitive data. This also represents excessive filesystem access if the agent does not strictly control the path. Strictly validate `local_file_path` so it points only to allowed, temporary, non-sensitive files; avoid direct user control over file paths; and implement an upload mechanism that uses a secure temporary directory and validates file types and contents. | LLM | SKILL.md:69 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network access; verify the import is necessary, since network and low-level system modules in skill code can indicate data exfiltration. | Static | skills/ide-rea/deepresearch-conversation/scripts/deepresearch_conversation.py:3 |
| MEDIUM | **User-controlled `query` parameter passed to remote LLM API.** The `deepresearch_conversation.py` script takes a JSON payload from `sys.argv[1]`, including a `query` field for the user's question. This `query` is sent directly to the Baidu Deep Research API, which is described as an "in-depth research" task involving "multi-step reasoning and execution," implying an LLM backend. This creates a direct vector for prompt injection, allowing a malicious user to manipulate the remote Baidu LLM; other string fields such as `title` and `description` within `structured_outline` could also be vectors. Implement robust input validation and sanitization for `query` and other user-controlled string fields before sending them to the LLM API, and consider prompt templating, input filtering, or an LLM-based guardrail to detect malicious prompts. | LLM | scripts/deepresearch_conversation.py:21 |
Full report: [skillshield.io/report/4d62e88906dc5c27](https://skillshield.io/report/4d62e88906dc5c27)
Powered by SkillShield