Trust Assessment
sandboxer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 3 critical, 1 high, 0 medium, and 0 low severity. Key findings include `tmux` command injection via session name, direct command injection into a `tmux` session, and shell command injection via `curl` URL parameters.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **`tmux` command injection via session name.** The skill builds `tmux` commands by embedding user-provided session names (e.g., `SESSION_NAME`) directly into the `-t` argument. A malicious name containing shell metacharacters or `tmux` command separators (e.g., `"; new-window; send-keys 'evil' Enter;"`) allows arbitrary command execution on the host. *Remediation:* sanitize or escape session names before embedding them in `tmux` commands, or use a `tmux` API or wrapper that handles escaping. | LLM | SKILL.md:39 |
| CRITICAL | **Direct command injection into `tmux` session.** The skill runs `tmux send-keys -t "SESSION_NAME" "implement feature X" Enter` to forward user requests to a running session. The forwarded string is taken verbatim from user input; if the session is running a shell, an attacker can execute arbitrary commands in that environment. *Remediation:* never forward unsanitized user input as keystrokes to a shell; validate and sanitize input, or use a more controlled execution API. | LLM | SKILL.md:45 |
| CRITICAL | **Shell command injection via `curl` URL parameters.** The skill builds `curl` commands with user-provided parameters (`dir` in `create`, `session` in `kill`) embedded directly into the URL string. Parameters containing shell metacharacters (e.g., `"; rm -rf /; echo "`) break out of the quoted string and execute arbitrary shell commands on the host. *Remediation:* escape or percent-encode parameters before embedding them in shell commands, or use an HTTP library that handles parameter encoding automatically. | LLM | SKILL.md:53 |
| HIGH | **Privileged operations and untrusted code installation.** The installation instructions recommend two dangerous actions: (1) `claude --dangerously-skip-permissions "clone github.com/chriopter/sandboxer..."`, which instructs the host LLM to bypass its own security controls and install code from an external, untrusted GitHub repository, a significant supply chain risk; (2) `sudo systemctl start sandboxer`, which runs with root privileges and, if automated by the LLM, is a severe privilege escalation. *Remediation:* remove the `--dangerously-skip-permissions` instruction and install software only through a secure, vetted, sandboxed process; drop the `sudo` requirement or require explicit manual user confirmation. Skills should operate with the least privilege necessary. | LLM | SKILL.md:25 |
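The two `tmux` findings share one root cause: user-controlled strings concatenated into a shell command line. A minimal sketch of the recommended fix, assuming the skill shells out from Python (the function name `tmux_send_keys` is illustrative, not from the skill):

```python
import shlex

def tmux_send_keys(session_name: str, text: str) -> str:
    """Build a tmux send-keys command with all user input shell-quoted.

    shlex.quote() wraps unsafe strings in single quotes, so metacharacters
    like ; and " arrive at tmux as literal characters, not separators.
    """
    return "tmux send-keys -t {} {} Enter".format(
        shlex.quote(session_name), shlex.quote(text)
    )

# The injection payload from the finding is rendered inert by quoting:
cmd = tmux_send_keys('main"; new-window; send-keys evil Enter; "', "ls")
```

Quoting the session name means the payload stays inside the `-t` argument instead of being parsed as additional commands.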
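For the `curl` finding, the safer pattern is to percent-encode parameters (or build the request in-process) rather than interpolating them into a shell string. A sketch under the assumption that the skill targets a local HTTP endpoint; the base URL and `build_url` helper are hypothetical, while the `dir` parameter and payload come from the finding:

```python
from urllib.parse import urlencode

def build_url(base: str, params: dict) -> str:
    """Percent-encode query parameters so shell metacharacters are inert."""
    return f"{base}?{urlencode(params)}"

# The payload from the finding survives only as harmless %XX escapes:
url = build_url("http://localhost:8080/create", {"dir": '"; rm -rf /; echo "'})
```

Passing the encoded URL as a single quoted argument (or skipping `curl` entirely in favor of an HTTP library such as `urllib.request`) removes the shell from the data path.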
[Full report](https://skillshield.io/report/cb5c63251271eb95)