Trust Assessment
otaku-reco received a trust score of 58/100, placing it in the Caution category: the skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 0 high, 1 medium, and 0 low severity. Key findings include "Unsanitized user input in shell command execution" and "Suspicious import: urllib.request".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, making it the weakest area of the skill.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Unsanitized user input in shell command execution. The skill's workflow instructs the host LLM to construct shell commands by directly embedding user-provided input (`<番名>` or `<用户原话>`, i.e. the anime title or the user's verbatim request) into the command string. This pattern is highly vulnerable to command injection: a malicious user could provide input containing shell metacharacters (e.g., `'; rm -rf /'`) to execute arbitrary commands on the underlying system. The `reco_cli.py` script itself does not sanitize these arguments, relying on the LLM's runtime to handle this securely. The host LLM runtime must properly escape or quote user-provided arguments before passing them to the shell. If shell execution is necessary, ensure the LLM uses a safe execution mechanism (e.g., `subprocess.run` with `shell=False` and arguments passed as a list) rather than direct string interpolation into a shell command. Alternatively, the skill could be redesigned to pass arguments via a more structured and secure method. | LLM | SKILL.md:14 |
| CRITICAL | Unsanitized user input in shell command execution. Same finding as above, at a second call site. | LLM | SKILL.md:17 |
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/robin797860/otaku-reco/reco_cli.py:16 |
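The remediation suggested in the critical findings can be sketched as follows. The actual command-line interface of `reco_cli.py` is not shown in this report, so the child command below is a stand-in that simply echoes its argument back; the point is the argument-passing pattern, not the specific CLI.

```python
import shlex
import subprocess
import sys


def run_skill_command(user_input: str) -> str:
    """Run a child process with user input passed as a discrete argument.

    With shell=False and arguments given as a list, the shell never parses
    user_input, so metacharacters such as "; rm -rf /" are inert data.
    The child command here just echoes its first argument; a real skill
    would invoke reco_cli.py the same way, e.g.
    [sys.executable, "reco_cli.py", user_input].
    """
    # UNSAFE counterpart (do NOT do this):
    #   os.system(f"python reco_cli.py {user_input}")
    result = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.argv[1])", user_input],
        shell=False,          # the default; stated explicitly for emphasis
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()


def quoted_command(user_input: str) -> str:
    """If a shell string is truly unavoidable, quote each piece explicitly."""
    return f"python reco_cli.py {shlex.quote(user_input)}"
```

With this pattern, an injection attempt is treated as a literal string: `run_skill_command("'; rm -rf /'")` returns the input unchanged instead of executing anything, and `shlex.quote` wraps unsafe input in single quotes before it ever reaches a shell.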