Trust Assessment
The `vercel` skill received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that must be addressed before use in production.
SkillShield's automated analysis identified 8 findings: 3 critical, 2 high, 3 medium, and 0 low severity. Key findings include file read plus network send exfiltration, sensitive path access (environment file), and potential command injection through Vercel CLI arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (8)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration** (`.env` file access). *Remediation:* remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by a skill unless explicitly part of its declared functionality. | Manifest | `skills/leonaaardob/lb-vercel-skill/SKILL.md:120` |
| CRITICAL | **File read + network send exfiltration** (`.env` file access). Same finding at a second location; remediation as above. | Manifest | `skills/leonaaardob/lb-vercel-skill/SKILL.md:129` |
| CRITICAL | **Potential command injection through Vercel CLI arguments.** The skill exposes a wide range of Vercel CLI commands, many of which accept user-controlled arguments (paths, names, IDs, URLs, environment-variable keys and values). If an AI agent interpolates untrusted user input directly into these arguments without sanitization or validation, arbitrary commands could execute on the host, e.g. shell commands injected via `[path]`, `<name>`, `<domain>`, or environment-variable values. *Remediation:* strictly validate and sanitize all user-provided arguments before constructing and executing Vercel CLI commands; avoid direct interpolation of untrusted input; use an allowlist for argument values or escape shell metacharacters; and ensure paths stay within expected directories with no traversal sequences. | LLM | `SKILL.md:40` |
| HIGH | **Excessive permissions granted by broad Vercel CLI access.** The skill exposes the full Vercel CLI, including commands that delete resources (`vercel rm`, `vercel projects remove`, `vercel domains remove`) or initiate financial transactions (`vercel domains buy`, `vercel domains transfer-in`). If an AI agent lets users invoke these without sufficient authorization checks, it effectively grants them excessive permissions over the linked Vercel account, risking unauthorized resource modification, deletion, or financial charges. *Remediation:* implement fine-grained access control in the agent; restrict which commands and arguments users can invoke based on their roles or permissions; confirm critical operations (deletion, financial transactions) with the user or an administrator before execution; and consider sandboxing the execution environment or using Vercel API tokens with limited scopes. | LLM | `SKILL.md:100` |
| HIGH | **Potential command injection/SSRF via `curl` URL path.** The skill documents `curl -s "https://vercel.com/docs/<path>"`. If the `<path>` component is derived from untrusted user input without validation, a malicious user could inject arbitrary URLs or `curl` options, leading to server-side request forgery (SSRF) against internal addresses or other domains, or to command injection if `curl` runs in a shell that allows command chaining within the URL string. *Remediation:* validate `<path>` so it contains only expected characters and cannot introduce arbitrary URL schemes, hostnames, or path traversal; ideally allowlist specific paths or use a URL parsing library to ensure only the intended `vercel.com/docs/` domain is accessed. | LLM | `SKILL.md:30` |
| MEDIUM | **Sensitive path access: environment file.** Access to the environment-file path `.env.local` was detected, which may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | `skills/leonaaardob/lb-vercel-skill/SKILL.md:120` |
| MEDIUM | **Sensitive path access: environment file.** Same finding (`.env.local`) at a second location; remediation as above. | Static | `skills/leonaaardob/lb-vercel-skill/SKILL.md:129` |
| MEDIUM | **Potential data exfiltration via `vercel env pull`.** The `vercel env pull [filename]` command pulls environment variables from Vercel into a local file. If `[filename]` is user-controlled and unvalidated, a malicious user could target a path outside the intended directory (e.g. traversal such as `../../.ssh/id_rsa`) or overwrite sensitive system files. While the command itself does not directly exfiltrate data, it can be a precursor to exfiltration if the agent later reads or exposes the contents of the specified file. *Remediation:* strictly validate `[filename]` so it refers to a safe, non-sensitive location inside the intended working directory; reject traversal sequences (`..`) and absolute paths; consider a fixed filename or a temporary file in a secure location. | LLM | `SKILL.md:128` |
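One way to act on the command-injection remediation above is sketched below in Python. It is illustrative only, not part of the skill: `SAFE_ARG` and `run_vercel` are hypothetical names, and the allowlist pattern is an assumption you would tune to the arguments your agent actually needs. The key points are validating every agent-supplied argument and invoking the CLI as an argv list rather than a shell string.

```python
import re
import subprocess

# Hypothetical allowlist: letters, digits, dots, underscores, hyphens only.
# No spaces, path separators, or shell metacharacters such as ; | $ ( ) `.
SAFE_ARG = re.compile(r"^[A-Za-z0-9._-]{1,100}$")

def run_vercel(subcommand: str, *args: str) -> str:
    """Run a Vercel CLI subcommand with validated, list-form arguments."""
    for arg in args:
        if not SAFE_ARG.fullmatch(arg):
            raise ValueError(f"rejected unsafe argument: {arg!r}")
    # argv-list execution (no shell=True) means metacharacters like ';'
    # or '$(...)' are never interpreted by a shell, even if validation
    # were somehow bypassed.
    result = subprocess.run(
        ["vercel", subcommand, *args],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

The argv-list form is the important part: even a value that slips past the regex is handed to the `vercel` binary as a literal argument, not re-parsed by a shell.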
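For the excessive-permissions finding, the report recommends restricting which commands users can invoke and confirming critical operations. A minimal sketch of such a policy gate follows; the command sets and function name are hypothetical placeholders, and a real deployment would derive them from the account's actual roles and the subset of the CLI the skill genuinely needs.

```python
# Hypothetical policy table: which Vercel subcommands an agent may run
# autonomously, and which require explicit user confirmation first.
READ_ONLY = {"ls", "inspect", "logs", "whoami", "teams"}
CONFIRM_FIRST = {"rm", "remove", "deploy", "domains", "env"}

def is_allowed(subcommand: str, user_confirmed: bool = False) -> bool:
    """Gate a requested subcommand: read-only commands run freely,
    destructive or billable commands need confirmation, and anything
    unrecognized is denied by default."""
    if subcommand in READ_ONLY:
        return True
    if subcommand in CONFIRM_FIRST:
        return user_confirmed
    return False
```

Deny-by-default matters here: a new CLI subcommand added in a future Vercel release stays blocked until someone deliberately classifies it.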
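The `curl` SSRF finding suggests allowlisting the docs path. The sketch below shows one possible shape for that check (the names `SAFE_DOCS_PATH` and `docs_url` are hypothetical, and the slug pattern is an assumption): only lowercase slug segments are accepted, which rules out alternate schemes, hostnames, `..` traversal, and option-like values beginning with `-`.

```python
import re
from urllib.parse import quote

DOCS_BASE = "https://vercel.com/docs/"
# Hypothetical slug pattern: lowercase alphanumeric segments joined by
# '/' or '-'. No dots, colons, leading dashes, or empty segments.
SAFE_DOCS_PATH = re.compile(r"^[a-z0-9]+(?:[/-][a-z0-9]+)*$")

def docs_url(path: str) -> str:
    """Build a vercel.com/docs URL from an untrusted <path> value."""
    if not SAFE_DOCS_PATH.fullmatch(path):
        raise ValueError(f"rejected docs path: {path!r}")
    return DOCS_BASE + quote(path, safe="/")
```

Because the base URL is a fixed constant and the suffix is constrained to slug characters, the request can only ever reach `vercel.com/docs/`, which is the allowlisting the finding asks for.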
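Finally, the `vercel env pull` finding calls for confining `[filename]` to the working directory. One common way to implement that check, sketched here with a hypothetical `safe_env_target` helper, is to resolve the candidate path and verify it is still inside the intended directory, which catches both `..` traversal and absolute paths in a single comparison.

```python
from pathlib import Path

def safe_env_target(filename: str, workdir: str) -> Path:
    """Resolve a `vercel env pull [filename]` target and verify it cannot
    escape the working directory via traversal or an absolute path."""
    base = Path(workdir).resolve()
    target = (base / filename).resolve()
    # resolve() collapses any '..' segments, and joining an absolute
    # `filename` replaces `base` entirely, so both escape routes end up
    # outside `base` and fail the containment check below.
    if target == base or not target.is_relative_to(base):
        raise ValueError(f"rejected target outside {workdir!r}: {filename!r}")
    return target
```

`Path.is_relative_to` requires Python 3.9+; on older versions the same containment test can be written with `os.path.commonpath`.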
Full report: [skillshield.io/report/9edb90bc00ebf1c6](https://skillshield.io/report/9edb90bc00ebf1c6)