Trust Assessment
vexa received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 3 critical, 1 high, 0 medium, and 0 low severity. Key findings include Network egress to untrusted endpoints, API Key requested in chat and stored in file/passed via CLI, Unsanitized webhook payload leads to shell command injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints HTTP request to raw IP address Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/dmitriyg228/vexa/scripts/onboard.mjs:303 | |
| CRITICAL | Unsanitized webhook payload leads to shell command injection The `scripts/vexa-transform.mjs` script constructs a shell command (`reportCmd`) using `platform` and `native_meeting_id` values extracted directly from an untrusted webhook payload. This command is then explicitly instructed to be executed by the agent. An attacker can send a malicious webhook payload containing shell metacharacters in `native_meeting_id` (e.g., `"; rm -rf /; #"`) to execute arbitrary commands on the host system. 1. Before embedding untrusted input into a shell command string, always sanitize or escape it using a function appropriate for the target shell (e.g., `shell-quote` or similar library). 2. Alternatively, pass arguments as an array to `spawn` or `spawnSync` (e.g., `['node', 'script.mjs', '--platform', platform, '--native_meeting_id', nativeMeetingId]`) to avoid shell interpretation of arguments. The LLM should be instructed to run commands using a safe execution mechanism. | LLM | scripts/vexa-transform.mjs:60 | |
| CRITICAL | Raw untrusted webhook payload embedded in LLM message, enabling prompt injection The `scripts/vexa-transform.mjs` script includes the full, raw webhook payload (`ctx.payload`) directly into the message sent to the LLM, using `JSON.stringify`. As this payload is untrusted content, an attacker could embed malicious instructions (e.g., "ignore previous instructions", "act as a different persona") within the JSON data. The LLM, when processing this message, could interpret these embedded instructions as new directives, leading to prompt injection and potential compromise of its behavior or data. 1. Avoid including raw, untrusted input directly in messages sent to the LLM. 2. If raw data must be included, strictly sanitize it to remove any text that could be interpreted as instructions or commands by the LLM. 3. Consider using specific delimiters or structured formats (e.g., XML tags) that the LLM is explicitly trained to treat as data, not instructions. | LLM | scripts/vexa-transform.mjs:74 | |
| HIGH | API Key requested in chat and stored in file/passed via CLI The skill instructs the LLM to ask the user for their `VEXA_API_KEY` directly in chat and then write it to `skills/vexa/secrets/vexa.env`. This practice is a direct credential harvesting risk, as the LLM could be manipulated to log or misuse the key. Furthermore, the `onboard.mjs` script allows passing the API key via a command-line argument (`--api_key <key>`), which exposes the secret in process listings (`ps aux`) on the system, leading to data exfiltration. 1. Avoid asking for API keys directly in chat. Instead, instruct users to set environment variables or use a secure secrets management system. 2. If file storage is necessary, ensure the file is created with strict permissions (e.g., `0o600`) and is only accessible by the agent. 3. Never pass sensitive credentials like API keys as command-line arguments. Use environment variables or secure input methods. | LLM | SKILL.md:30 |
Scan History
Embed Code
[](https://skillshield.io/report/a85284cbe48d269a)
Powered by SkillShield