Trust Assessment
tailscale received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include command injection via `resolve_device` output, sensitive path access to an AI agent config, and sensitive environment variable access (`$HOME`).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Command Injection via `resolve_device` output — The `resolve_device` function in `scripts/ts-api.sh`, intended to resolve device IDs, returns the original user input when no matching device is found. This value (`$id`) is then used directly to construct API endpoints in the `api` function (e.g., `api GET "/device/${id}"`). If the input contains shell metacharacters (`$(command)`, backticks, `;`), they are expanded by the shell before the `api` function's `curl` command runs, allowing arbitrary command execution with the script's privileges. Affected commands include `device`, `authorize`, `delete`, `tags`, and `routes`. Remediation: `resolve_device` should strictly validate its input as a device ID or hostname and return an error (not the unsanitized input) when no device matches, forcing callers to handle the failure; any value embedded in a URL path must be URL-encoded first, and any variable used to build a command string must be quoted or escaped (e.g., with `printf %q`) to prevent shell expansion. | LLM | scripts/ts-api.sh:116 |
| CRITICAL | Command Injection via `acl-validate` file path — The `cmd_acl_validate` function in `scripts/ts-api.sh` passes the user-provided file path argument (`$file`) directly to `curl --data-binary "@${file}"`. If `$file` contains shell metacharacters (e.g., `$(command)` or backticks), they are expanded by the shell before `curl` executes, allowing arbitrary command execution with the script's privileges. Remediation: sanitize `$file` to ensure it is a safe file path, or read the file's contents and pass them to `curl` on standard input (e.g., `curl ... --data-binary @- <<< "$file_content"`) so the path itself is never shell-expanded; if the path must be passed directly, quote it for the shell (e.g., `"@$(printf %q "$file")"`). | LLM | scripts/ts-api.sh:242 |
| HIGH | Sensitive path access: AI agent config — Access to the AI agent config path `~/.clawdbot/` detected. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/jmagar/tailscale/SKILL.md:8 |
| MEDIUM | Sensitive environment variable access: `$HOME` — Access to the sensitive environment variable `$HOME` detected in a shell context. Verify this access is necessary and the value is not exfiltrated. | Static | skills/jmagar/tailscale/scripts/ts-api.sh:7 |
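The remediations recommended for the two critical findings can be sketched in shell. This is a minimal illustration under assumptions, not the skill's actual code: the function names mirror the report, but the identifier whitelist, error messages, and the `https://example.invalid` endpoint are placeholders.

```shell
#!/usr/bin/env bash
# Sketch of hardened helpers for ts-api.sh (hypothetical, not the skill's code).
set -euo pipefail

# Accept only characters plausible in a Tailscale device ID or hostname;
# fail instead of echoing attacker-controlled input back to the caller.
resolve_device() {
  local id="$1"
  if [[ "$id" =~ ^[A-Za-z0-9._-]+$ ]]; then
    printf '%s\n' "$id"
  else
    echo "error: invalid device identifier" >&2
    return 1
  fi
}

# Read the ACL file ourselves and feed it to curl on stdin, so the
# user-supplied path never appears inside a curl argument string.
acl_validate() {
  local file="$1"
  [[ -r "$file" ]] || { echo "error: cannot read $file" >&2; return 1; }
  curl --silent --data-binary @- "https://example.invalid/acl/validate" < "$file"
}

resolve_device 'laptop-1'                          # prints laptop-1
resolve_device '$(touch /tmp/pwned)' || echo "rejected"   # prints rejected
```

Because `resolve_device` now returns a non-zero status on unmatched input, callers such as `device` or `delete` are forced to handle the error path rather than silently forwarding unsanitized input into a URL.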
[View the full report on SkillShield](https://skillshield.io/report/d5792bea158f9b11)
Powered by SkillShield