Trust Assessment
jellyseerr received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 19 findings: 3 critical, 6 high, 10 medium, and 0 low severity. Key findings include persistence/self-modification instructions, unsafe deserialization/dynamic eval, and a dangerous call to `__import__()`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100, reflecting the critical persistence findings detailed below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (19)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Persistence / self-modification instructions**: crontab manipulation (list/remove/edit). Remediation: remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/ericrosenberg/jellyseerr/SKILL.md:72 |
| CRITICAL | **Persistence / self-modification instructions**: systemd service persistence. Remediation: remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/ericrosenberg/jellyseerr/scripts/install_service.sh:35 |
| CRITICAL | **Persistence / self-modification instructions**: systemd service persistence. Remediation: remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/ericrosenberg/jellyseerr/scripts/setup_webhook.sh:30 |
| HIGH | **Unsafe deserialization / dynamic eval**: Python builtins/import manipulation. Remediation: remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/ericrosenberg/jellyseerr/scripts/monitor_availability.py:77 |
| HIGH | **Dangerous call: `__import__()`** detected in function `send_notification`; this can execute arbitrary code. Remediation: avoid dangerous functions such as exec/eval/os.system; use safer alternatives. | Static | skills/ericrosenberg/jellyseerr/scripts/monitor_availability.py:77 |
| HIGH | **Persistence mechanism: systemd service**: systemd service pattern detected; persistence mechanisms allow malware to survive system restarts. Remediation: review this persistence pattern; skills should not modify system startup configuration. | Static | skills/ericrosenberg/jellyseerr/scripts/install_service.sh:35 |
| HIGH | **Persistence mechanism: systemd service**: systemd service pattern detected; persistence mechanisms allow malware to survive system restarts. Remediation: review this persistence pattern; skills should not modify system startup configuration. | Static | skills/ericrosenberg/jellyseerr/scripts/setup_webhook.sh:30 |
| HIGH | **API key exfiltration during setup**: `scripts/setup.sh` prompts for a Jellyseerr server URL and an API key, then runs a `curl` connection test against the supplied `SERVER_URL` with the `API_KEY` in the `X-Api-Key` header. If the user enters a malicious URL (e.g., an attacker's server), the key is sent to that server. Remediation: validate the URL and probe without the API key, including the key only once the URL is known to be a legitimate Jellyseerr instance. | LLM | scripts/setup.sh:17 |
| HIGH | **User-controlled content in LLM output channel (prompt injection)**: notification messages are built from titles and subjects returned by the Jellyseerr API or webhooks and printed to `stdout` with the `SEND_MESSAGE:` prefix, which the host LLM likely interprets as a command to send a message. An attacker controlling `title` (search results in `monitor_availability.py`) or `subject` (webhook payload in `webhook_server.py`) could inject arbitrary instructions into the LLM's output. Remediation: sanitize or escape external content before passing it to the host LLM, and consider a dedicated tool call for sending messages rather than the `SEND_MESSAGE:` prefix. | LLM | scripts/send_notifications.py:24 |
| MEDIUM | **Suspicious import: `requests`**: this module provides network or low-level system access; network and system modules in skill code may indicate data exfiltration. Remediation: verify the import is necessary. | Static | skills/ericrosenberg/jellyseerr/scripts/monitor_availability.py:8 |
| MEDIUM | **Suspicious import: `requests`**: this module provides network or low-level system access; network and system modules in skill code may indicate data exfiltration. Remediation: verify the import is necessary. | Static | skills/ericrosenberg/jellyseerr/scripts/request_movie.py:8 |
| MEDIUM | **Suspicious import: `requests`**: this module provides network or low-level system access; network and system modules in skill code may indicate data exfiltration. Remediation: verify the import is necessary. | Static | skills/ericrosenberg/jellyseerr/scripts/request_tv.py:8 |
| MEDIUM | **Suspicious import: `requests`**: this module provides network or low-level system access; network and system modules in skill code may indicate data exfiltration. Remediation: verify the import is necessary. | Static | skills/ericrosenberg/jellyseerr/scripts/search.py:8 |
| MEDIUM | **Sensitive environment variable access: `$HOME`** in shell context. Remediation: verify the access is necessary and the value is not exfiltrated. | Static | skills/ericrosenberg/jellyseerr/scripts/auto_monitor.sh:5 |
| MEDIUM | **Sensitive environment variable access: `$USER`** in shell context. Remediation: verify the access is necessary and the value is not exfiltrated. | Static | skills/ericrosenberg/jellyseerr/scripts/install_service.sh:21 |
| MEDIUM | **Sensitive environment variable access: `$HOME`** in shell context. Remediation: verify the access is necessary and the value is not exfiltrated. | Static | skills/ericrosenberg/jellyseerr/scripts/setup.sh:4 |
| MEDIUM | **Sensitive environment variable access: `$USER`** in shell context. Remediation: verify the access is necessary and the value is not exfiltrated. | Static | skills/ericrosenberg/jellyseerr/scripts/setup_webhook.sh:18 |
| MEDIUM | **Persistent execution via crontab modification**: `SKILL.md` documents cron-based polling that rewrites the user's crontab, appending an entry that runs `$(pwd)/scripts/auto_monitor.sh` every minute. Crontab modification is a high-privilege operation; if the skill's environment or `$(pwd)` can be manipulated, this could lead to arbitrary command injection and persistent execution. Remediation: avoid direct crontab modification in favor of a sandboxed or agent-managed scheduler; if unavoidable, use an absolute, stable script path (e.g., via `readlink -f`) rather than `$(pwd)`. | LLM | SKILL.md:59 |
| MEDIUM | **Systemd service installation requires sudo and hardcodes paths/users**: `scripts/install_service.sh` and `scripts/setup_webhook.sh` require `sudo` to install systemd services, granting root-level privileges that exceed what a typical AI agent skill needs. `install_service.sh` also hardcodes `SCRIPT_DIR` to `/home/clawd/clawd/skills/jellyseerr/scripts` and `USER="clawd"`, which is brittle and could cause privilege issues if the skill runs as a different user or from a different location. Remediation: minimize `sudo` and keep its scope narrow; derive the user and script directory dynamically (e.g., `$(id -un)`, `$(dirname "${BASH_SOURCE[0]}")`); prefer less-privileged mechanisms for background processes where the agent's ecosystem allows. | LLM | scripts/install_service.sh:4 |
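The `__import__()` finding (monitor_availability.py:77) is typically remediated by replacing string-driven import with an explicit, bounded lookup. A minimal sketch, assuming a hypothetical `load_backend` helper and allowlist that are not part of the skill itself:

```python
import importlib

# Hypothetical allowlist of notification backends; the skill's actual
# call sites are not shown in the report.
_ALLOWED_BACKENDS = {"smtplib", "json"}

def load_backend(name: str):
    """Resolve a module explicitly instead of calling __import__() on an
    arbitrary string inside a function like send_notification."""
    if name not in _ALLOWED_BACKENDS:
        raise ValueError(f"backend {name!r} is not allowlisted")
    return importlib.import_module(name)
```

An allowlist keeps the dynamic dispatch the skill may want while ensuring a scanner (and a reviewer) can enumerate every module the code can ever load.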
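For the API-key exfiltration finding (scripts/setup.sh:17), the remediation amounts to probing without credentials until the host is confirmed. A sketch of that policy in Python, with hypothetical helper names; the real script uses `curl` in shell:

```python
from urllib.parse import urlparse

def is_plausible_server_url(url: str) -> bool:
    """Reject anything that cannot be an http(s) endpoint before any
    request carrying the API key is attempted."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.hostname)

def connection_test_headers(url: str, api_key: str, trusted: bool) -> dict:
    """Build probe headers: attach X-Api-Key only for a host the
    operator has explicitly confirmed as a Jellyseerr instance."""
    if not is_plausible_server_url(url):
        raise ValueError(f"refusing to probe malformed URL: {url!r}")
    return {"X-Api-Key": api_key} if trusted else {}
```

The first probe then carries no secret, so a mistyped or attacker-supplied URL learns nothing beyond the fact that a request arrived.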
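The prompt-injection finding (scripts/send_notifications.py:24) calls for sanitizing externally sourced `title`/`subject` fields before they reach the `SEND_MESSAGE:` channel. A minimal sketch; the token set stripped here is an assumption about what a host LLM might read as markup or instructions, not the skill's actual filter:

```python
import re

# Assumed-risky tokens: newlines (could start a second SEND_MESSAGE
# line) plus common markdown/instruction characters.
_RISKY = re.compile(r"[\r\n`*_#>\[\]]")

def sanitize_external_text(text: str, max_len: int = 200) -> str:
    """Strip newlines and markdown-ish tokens from API/webhook fields."""
    return _RISKY.sub("", text)[:max_len]

def build_notification(title: str) -> str:
    # The SEND_MESSAGE: prefix mirrors the skill's output convention.
    return f"SEND_MESSAGE: Now available: {sanitize_external_text(title)}"
```

Stripping newlines is the key step: it prevents injected content from opening a fresh `SEND_MESSAGE:` line of its own, though the report's stronger suggestion (a dedicated tool call instead of a stdout prefix) removes the channel entirely.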
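Finally, the hardcoded-path finding (scripts/install_service.sh:4) is fixed by deriving the install location at run time. The shell remediation is `$(dirname "${BASH_SOURCE[0]}")`; the equivalent idea sketched in Python, with a hypothetical `exec_start_line` helper:

```python
from pathlib import PurePosixPath

def exec_start_line(skill_dir: PurePosixPath, script_name: str) -> str:
    """Build a systemd ExecStart= value from wherever the skill is
    actually installed, instead of the hardcoded
    /home/clawd/clawd/skills/jellyseerr/scripts path."""
    script = skill_dir / "scripts" / script_name
    return f"ExecStart=/usr/bin/python3 {script}"
```

Passing the discovered skill directory in explicitly keeps the unit file correct for any user and any install prefix, which also narrows what a broad `sudo` grant can be tricked into executing.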
Full report: https://skillshield.io/report/a9d93b74d9dbdd3f