Trust Assessment
ResearchMonitor received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Both high-severity findings concern potential command injection via user input in script calls.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User Input in Script Calls.** The skill's workflow describes executing a Python script (`scripts/daily_briefing.py`) with user-provided input for arguments such as `--add-topic`, `--check-seen`, and `--mark-seen`. If the host LLM constructs the shell command string by directly concatenating user input without proper escaping, a malicious user could inject arbitrary shell commands; for example, a topic like `'; rm -rf /'` could lead to critical system compromise if the LLM executes the command via `shell=True` or similar methods without sanitization. The LLM should execute external commands using a method that passes arguments as a list rather than a single shell string, e.g. `subprocess.run(['python', 'scripts/daily_briefing.py', '--add-topic', user_input])`, which prevents shell metacharacters in `user_input` from being interpreted as commands. If `shell=True` is strictly necessary, rigorously escape all user-controlled input with `shlex.quote()`. | LLM | SKILL.md:20 |
| HIGH | **Potential Command Injection via User Input in Script Calls.** Same root cause as the finding above, at a second call site: user-provided input reaches `scripts/daily_briefing.py` via `--mark-seen`. Providing an 'ID' like `'; cat /etc/passwd'` could lead to data exfiltration if the LLM executes the command via `shell=True` or similar methods without sanitization. Remediation is identical: pass arguments as a list, e.g. `subprocess.run(['python', 'scripts/daily_briefing.py', '--mark-seen', user_input])`, and escape with `shlex.quote()` if `shell=True` is strictly necessary. | LLM | SKILL.md:45 |
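Both findings prescribe the same remediation: pass user input as a discrete argument in a list-form `subprocess.run()` call, falling back to `shlex.quote()` only when a shell string is unavoidable. The sketch below is a minimal illustration of that pattern, not code from the skill; the function names and the flag allow-list are assumptions based on the flags named in the findings.

```python
import shlex
import subprocess

# Assumed allow-list, based on the flags cited in the findings.
ALLOWED_FLAGS = {"--add-topic", "--check-seen", "--mark-seen"}

def run_daily_briefing(flag: str, user_input: str) -> str:
    """Invoke scripts/daily_briefing.py with user input as a discrete argv entry."""
    if flag not in ALLOWED_FLAGS:
        raise ValueError(f"unsupported flag: {flag}")
    # List form: user_input reaches the script as a single argument, so shell
    # metacharacters such as `; rm -rf /` are never interpreted by a shell.
    result = subprocess.run(
        ["python", "scripts/daily_briefing.py", flag, user_input],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

def run_daily_briefing_shell(topic: str) -> None:
    # Fallback only if shell=True is strictly necessary: shlex.quote() escapes
    # the user-controlled token so it is passed through as a literal string.
    cmd = f"python scripts/daily_briefing.py --add-topic {shlex.quote(topic)}"
    subprocess.run(cmd, shell=True, check=True)
```

Validating the flag against a fixed allow-list closes the complementary hole where attacker-controlled text lands in the option position rather than the value position.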
[View the full report](https://skillshield.io/report/f30ecb0712c531e2)