Trust Assessment
upstash-redis-kv received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. The key findings are a user-controlled bypass of security confirmation ("YOLO Mode"), potential command injection via unsanitized arguments in shell execution, and access to highly destructive Redis commands without robust safeguards.
The analysis covered four layers: dependency_graph, manifest_analysis, llm_behavioral_safety, and static_code_analysis. The llm_behavioral_safety layer scored lowest at 48/100, indicating room for improvement.
Last analyzed on February 11, 2026 (commit 326f2466). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
CRITICAL: User-controlled bypass of security confirmation (YOLO Mode)
Layer: Unknown | Location: SKILL.md:290

The skill explicitly instructs the LLM to bypass critical security confirmation prompts for destructive operations when the user supplies specific phrases (e.g., "YOLO mode" or "Don't ask for confirmation"). This allows an untrusted user to directly manipulate the LLM's safety mechanisms, leading to unconfirmed execution of potentially data-destroying commands like `FLUSHDB` or `FLUSHALL`.

Remediation: Remove the YOLO Mode functionality. The LLM should always seek confirmation for destructive operations, regardless of user input, or the skill should implement a more robust, explicit, and secure opt-out mechanism (e.g., an explicit tool parameter rather than a natural-language prompt).
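As a minimal sketch of that remediation, a destructive operation could require a structured confirmation field in the tool's input schema rather than a natural-language phrase. The `FlushToolInput` shape and `flushTool` function below are hypothetical, assuming a TypeScript tool layer; the skill does not define them.

```ts
// Hypothetical tool input: confirmation is a structured boolean the caller
// must set explicitly; prompt text like "YOLO mode" cannot flip it.
interface FlushToolInput {
  target: "FLUSHDB" | "FLUSHALL";
  confirmDestructive: boolean; // must be literally true to proceed
}

function flushTool(input: FlushToolInput): void {
  if (input.confirmDestructive !== true) {
    throw new Error(
      `${input.target} deletes data; refusing to run without confirmDestructive: true.`,
    );
  }
  // ...dispatch to the Redis client only after the structured check passes.
}
```

Because the gate lives in the tool schema rather than in the prompt, no phrasing the user types can satisfy it on its own.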
HIGH: Potential command injection via unsanitized arguments in shell execution
Layer: Unknown | Location: SKILL.md:10

The skill instructs the LLM to execute shell commands using `bun run scripts/upstash-client.ts <command> [args...]`, where the command and arguments are implicitly derived from user input. Without explicit instructions for the LLM to sanitize or escape these arguments before constructing the shell command, a malicious user could inject arbitrary shell commands (e.g., `GET mykey; rm -rf /`) by crafting specific input, leading to arbitrary code execution on the host system.

Remediation: Add explicit instructions for the LLM to properly sanitize and escape all user-provided arguments before passing them to `bun run`. This might involve quoting arguments, using a safe argument-parsing library, or restricting input to a predefined set of safe values.
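A minimal hardening sketch, assuming the skill's Bun/TypeScript environment: passing the command and arguments to `Bun.spawn` as an argv array executes the script without a shell, so metacharacters in the arguments are never interpreted, and an allowlist rejects unexpected commands. `ALLOWED_COMMANDS` and `runRedisCommand` are illustrative names, not part of the skill's actual code.

```ts
// Hypothetical hardened dispatcher for scripts/upstash-client.ts invocations.
const ALLOWED_COMMANDS = new Set(["GET", "SET", "DEL", "KEYS", "TTL", "EXPIRE"]);

async function runRedisCommand(command: string, args: string[]): Promise<string> {
  const cmd = command.toUpperCase();
  if (!ALLOWED_COMMANDS.has(cmd)) {
    throw new Error(`Command not permitted: ${cmd}`);
  }
  // An argv array to Bun.spawn bypasses the shell entirely, so
  // metacharacters like `;`, `|`, or `$(...)` in args stay inert strings.
  const proc = Bun.spawn(
    ["bun", "run", "scripts/upstash-client.ts", cmd, ...args],
    { stdout: "pipe", stderr: "pipe" },
  );
  const output = await new Response(proc.stdout).text();
  if ((await proc.exited) !== 0) {
    throw new Error(await new Response(proc.stderr).text());
  }
  return output.trim();
}

// A payload such as "mykey; rm -rf /" arrives as one harmless key argument:
// await runRedisCommand("GET", ["mykey; rm -rf /"]);
```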
MEDIUM: Access to highly destructive Redis commands without robust safeguards
Layer: Unknown | Location: SKILL.md:260

The skill provides direct access to the `FLUSHDB` and `FLUSHALL` commands, which can irrevocably delete all data in the Redis instance(s). While the skill labels these commands 'DANGEROUS' and requires confirmation by default, the YOLO Mode prompt-injection vulnerability allows a user to bypass this confirmation. The combination creates a significant risk of accidental or malicious data loss.

Remediation:
1. Remove the YOLO Mode functionality (per the SS-LLM-001 remediation above).
2. Consider restricting access to `FLUSHDB` and `FLUSHALL` entirely, or implement a multi-factor confirmation process for these specific commands that cannot be bypassed by simple natural-language prompts.
3. If these commands are absolutely necessary, ensure the LLM is instructed to provide very explicit warnings and require a specific, non-natural-language confirmation (e.g., "Type 'CONFIRM DELETE ALL DATA' to proceed").
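A sketch of the non-natural-language confirmation suggested in item 3, assuming a TypeScript wrapper around the Redis client; the command list, phrase, and `assertConfirmed` name are illustrative, not taken from the skill.

```ts
// Hypothetical confirmation gate for destructive Redis commands.
const DESTRUCTIVE_COMMANDS = new Set(["FLUSHDB", "FLUSHALL"]);
const REQUIRED_PHRASE = "CONFIRM DELETE ALL DATA";

function assertConfirmed(command: string, confirmationPhrase?: string): void {
  if (!DESTRUCTIVE_COMMANDS.has(command.toUpperCase())) return;
  // The phrase must be echoed back verbatim; "YOLO mode" or any other
  // natural-language instruction cannot satisfy this equality check.
  if (confirmationPhrase !== REQUIRED_PHRASE) {
    throw new Error(
      `${command} deletes all data. Type '${REQUIRED_PHRASE}' to proceed.`,
    );
  }
}
```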