Trust Assessment
tunneling received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include command injection via unsanitized user input in SSH port forwarding, command injection via unsanitized user input in SSH subdomain specification, and untrusted "Usage Guidelines" that may influence LLM behavior.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via unsanitized user input in SSH port forwarding.** The skill constructs an `ssh` command using user-provided input for the `<PORT>` placeholder. If the agent does not properly sanitize this input before execution, a malicious user could inject shell metacharacters (e.g., `'; rm -rf /'`) to execute arbitrary commands on the host system where the agent is running. This vulnerability exists in multiple `ssh` command examples. Remediation: strictly validate and sanitize all user-provided input (e.g., `<PORT>`, `myname`) before incorporating it into shell commands; allow only numeric values for ports and alphanumerics/hyphens for subdomains, and consider using a dedicated `ssh` library or a robust shell escaping mechanism. | LLM | SKILL.md:20 |
| CRITICAL | **Command Injection via unsanitized user input in SSH subdomain specification.** The skill constructs an `ssh` command using user-provided input for the `myname` placeholder (custom subdomain). If the agent does not properly sanitize this input before execution, a malicious user could inject shell metacharacters (e.g., `'; rm -rf /'`) to execute arbitrary commands on the host system where the agent is running. Remediation: strictly validate and sanitize all user-provided input (e.g., `<PORT>`, `myname`) before incorporating it into shell commands; allow only numeric values for ports and alphanumerics/hyphens for subdomains, and consider using a dedicated `ssh` library or a robust shell escaping mechanism. | LLM | SKILL.md:27 |
| HIGH | **Untrusted "Usage Guidelines" may influence LLM behavior.** The "Usage Guidelines" section within the untrusted input block provides instructions to the agent (e.g., "Ask which port", "Run the SSH command in the background", "Report the public URL"). While these are intended instructions for the skill's operation, their presence within untrusted content means a malicious actor could modify them to manipulate the LLM's behavior or output, overriding its core instructions. Remediation: move all instructions intended for the LLM (such as "Usage Guidelines") out of the untrusted content block and into the trusted system prompt or skill definition; untrusted content should contain only data, not instructions for the LLM. | LLM | SKILL.md:38 |
| MEDIUM | **Inherent data exfiltration risk by exposing local ports.** The core function of this skill is to expose a local port to the internet via an SSH tunnel. While this is the intended functionality, it inherently carries a risk of data exfiltration if the user exposes a port serving sensitive data or services. The agent is instructed to "Report the public URL back to the user", which is not direct exfiltration by the agent, but the *skill itself* facilitates external access to local resources. Remediation: the agent should explicitly warn the user about the security implications of exposing local ports to the internet, advise against exposing ports that serve sensitive information or administrative interfaces, and ensure the user explicitly confirms understanding of this risk before proceeding. | LLM | SKILL.md:19 |
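The remediation advice for the two critical findings (allow only numeric ports, only alphanumerics/hyphens for subdomains, and avoid shell interpolation entirely) can be sketched as follows. This is a minimal illustration, not the skill's actual code: the `-R` forwarding syntax and the `tunnel.example.com` host are assumptions, since the report does not show the real `ssh` command. The key point is that validated values are placed into an argv list rather than a shell string, so metacharacters like `'; rm -rf /'` are never interpreted.

```python
import re

def validate_port(value: str) -> int:
    """Accept only a bare integer in the valid TCP port range (1-65535)."""
    if not re.fullmatch(r"[0-9]{1,5}", value):
        raise ValueError(f"invalid port: {value!r}")
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def validate_subdomain(value: str) -> str:
    """Accept only lowercase alphanumerics and interior hyphens, per the remediation advice."""
    if not re.fullmatch(r"[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?", value):
        raise ValueError(f"invalid subdomain: {value!r}")
    return value

def build_tunnel_argv(port: str, subdomain: str,
                      host: str = "tunnel.example.com") -> list[str]:
    """Build the ssh command as an argv list (no shell), so shell
    metacharacters in rejected-or-validated input are inert.
    The host and -R syntax here are illustrative assumptions."""
    p = validate_port(port)
    s = validate_subdomain(subdomain)
    return ["ssh", "-R", f"{s}:80:localhost:{p}", host]
```

The resulting list would be passed to something like `subprocess.run(argv)` without `shell=True`; combining strict allow-list validation with argv-style execution addresses both injection findings at once.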