Trust Assessment
parallel-ai-research received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include "Missing required field: name", "Potential Command Injection in `parallel-research` arguments", and "Potential Command Injection in `export-pdf` arguments".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Command Execution via Cron Job `payload.message`.** The skill defines a cron job whose `payload.message` field contains a string the agent is expected to interpret and execute as a command (indicated by "Run:"). While the example shows `parallel-research result <run_id>`, the mechanism allows any arbitrary command string to be scheduled. If a malicious user can influence the content of this `message` (e.g., by manipulating the `<run_id>` or other variables that construct the message), they could schedule and execute arbitrary shell commands with the agent's permissions. This is a critical command injection vulnerability. The `payload.message` field should not be used for arbitrary command execution. Instead, the cron job payload should specify a structured action or tool call with clearly defined and validated parameters rather than a free-form command string. If a command must be executed, it should be a predefined, whitelisted command with all arguments strictly validated and sanitized. | LLM | SKILL.md:128 |
| HIGH | **Potential Command Injection in `parallel-research` arguments.** The skill instructs the agent to execute `parallel-research create "Your research question"`. If the research question is derived directly from unsanitized user input, a malicious user could inject shell commands (e.g., `"; rm -rf /"`), leading to arbitrary code execution. The skill description does not specify any sanitization for this input. Ensure all user-provided strings passed as arguments to shell commands are properly escaped or sanitized (e.g., using `shlex.quote` in Python or similar mechanisms in other languages) before execution. | LLM | SKILL.md:109 |
| HIGH | **Potential Command Injection in `export-pdf` arguments.** The skill instructs the agent to execute `export-pdf ~/.openclaw/workspace/research/<topic-slug>/research.md` and `export-pdf research.md ~/Desktop/output.pdf`. If `<topic-slug>` or the output path (`~/Desktop/output.pdf`) is derived from unsanitized user input, a malicious user could inject shell commands or perform path traversal (e.g., `export-pdf research.md "$(rm -rf /tmp/malicious_file; echo /tmp/output.pdf)"` or `export-pdf research.md "../../../etc/passwd"`). Ensure all user-provided strings used as file paths or shell-command arguments are properly escaped, sanitized, and validated (e.g., by restricting paths to allowed directories) before execution. | LLM | SKILL.md:147 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | skills/brennerspear/parallel-ai-research/SKILL.md:1 |
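Both high-severity findings recommend escaping user-supplied strings before they reach a shell. A minimal sketch in Python of what that could look like for the `parallel-research` case, assuming the agent constructs the command itself (the wrapper functions are illustrative, not part of the skill):

```python
import shlex

def research_argv(question: str) -> list[str]:
    # Preferred fix: build an argv list and run it without a shell
    # (e.g., subprocess.run(argv)); the question is then delivered as
    # one literal argument with nothing for the shell to interpret.
    return ["parallel-research", "create", question]

def research_shell_string(question: str) -> str:
    # If a shell string is unavoidable, shlex.quote keeps metacharacters
    # like `;`, `$()`, and backticks literal.
    return f"parallel-research create {shlex.quote(question)}"

hostile = '"; rm -rf /'
print(research_argv(hostile))          # the payload stays one inert argv element
print(research_shell_string(hostile))  # prints: parallel-research create '"; rm -rf /'
```

The argv-list form is the stronger of the two, since it removes the shell from the picture entirely rather than trying to outwit it.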
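For the `export-pdf` finding, the recommended mitigation is to confine user-derived paths to an allowed directory. A minimal sketch, assuming the research workspace lives at `~/.openclaw/workspace/research` as the skill's examples show (the helper name is hypothetical):

```python
from pathlib import Path

ALLOWED_ROOT = (Path.home() / ".openclaw" / "workspace" / "research").resolve()

def safe_research_path(user_path: str) -> Path:
    # Normalize ".." segments and symlinks, then verify the result is
    # still inside the allowed research directory.
    resolved = (ALLOWED_ROOT / user_path).resolve()
    if not resolved.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"path escapes research directory: {user_path!r}")
    return resolved

print(safe_research_path("my-topic/research.md"))  # accepted
# safe_research_path("../../../etc/passwd")        # raises ValueError
```

Note that `Path.is_relative_to` requires Python 3.9+, and that the check must run on the *resolved* path; comparing raw strings would let `..` segments slip through.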
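The critical cron finding asks for the free-form `payload.message` command string to be replaced by a structured, allowlisted action. One way that could look in Python (the action names and the alphanumeric run-id rule are illustrative assumptions, not the skill's actual schema):

```python
# Only predefined actions may be scheduled, each mapped to a fixed argv
# prefix; there is no free-form command string left to inject into.
ALLOWED_ACTIONS = {
    "fetch_result": ["parallel-research", "result"],
}

def build_cron_argv(action: str, run_id: str) -> list[str]:
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action not allowed: {action!r}")
    # Validate the single variable argument instead of trusting it:
    # assume run ids are plain alphanumerics.
    if not run_id.isalnum():
        raise ValueError(f"invalid run id: {run_id!r}")
    return ALLOWED_ACTIONS[action] + [run_id]

print(build_cron_argv("fetch_result", "abc123"))
# prints: ['parallel-research', 'result', 'abc123']
```

The resulting argv list would then be executed without a shell, so even a validation gap could not escalate into `Run: <arbitrary command>` semantics.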