Trust Assessment
vapi-skill received a trust score of 27/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 0 high, 2 medium, and 0 low severity. Key findings include "Arbitrary command execution", "Missing required field: name", and "Remote code execution: curl/wget pipe to shell".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Remote code download piped to an interpreter. Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/colygon/vapi-skill/SKILL.md:85 |
| CRITICAL | **Remote code execution: curl/wget pipe to shell.** Detected a pattern that downloads and immediately executes remote code. This is a primary malware delivery vector. Never pipe curl/wget output directly to a shell interpreter. | Static | skills/colygon/vapi-skill/SKILL.md:85 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | skills/colygon/vapi-skill/SKILL.md:1 |
| MEDIUM | **Unsafe CLI installation via `curl \| bash`.** The skill documentation instructs users to install the Vapi CLI by piping a script directly from a URL into bash. This method is inherently risky, as it executes arbitrary code from an external source without prior review. If the remote server (vapi.ai) were compromised, a malicious script could be executed on the user's system, leading to supply chain compromise. An AI agent tasked with setting up the environment might execute this command without understanding the risks. Recommend a safer installation method, such as downloading the script, reviewing its contents, and then executing it, or using a package manager if available. Alternatively, provide a hash of the expected script content for verification. | LLM | SKILL.md:42 |
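The "Missing required field" finding can be resolved with a small frontmatter addition at the top of SKILL.md. A minimal sketch, assuming typical skill frontmatter conventions; the `description` value here is an illustrative placeholder, not taken from the skill itself:

```yaml
---
name: vapi-skill                                   # required field that was missing
description: Manage Vapi voice agents via the CLI  # illustrative placeholder text
---
```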
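The safer installation flow recommended in the last finding (download, review, verify a published hash, then execute) can be sketched as below. This is a minimal demonstration, not the Vapi vendor's actual installer: the "downloaded" script is simulated with a local file so the example runs without network access, and the checksum is computed locally to stand in for a vendor-published value.

```shell
# Safer alternative to `curl ... | bash`:
#   1. download the installer to a file,
#   2. inspect it,
#   3. verify it against a published checksum,
#   4. only then execute it.
tmp=$(mktemp -d)

# Step 1 (simulated): in real use this would be
#   curl -fsSL "$INSTALL_URL" -o "$tmp/install.sh"
printf 'echo "installer ran"\n' > "$tmp/install.sh"

# Step 2: review the script before running it, e.g. `less "$tmp/install.sh"`.

# Step 3: compare against the vendor-published SHA-256. Here the "published"
# hash is computed from the file itself purely so the demo is self-contained.
expected=$(sha256sum "$tmp/install.sh" | cut -d' ' -f1)
echo "$expected  $tmp/install.sh" | sha256sum -c -

# Step 4: execute only after review and verification succeed.
bash "$tmp/install.sh"
```

If `sha256sum -c` fails, the script should not be executed; that failure is exactly the signal a compromised download would produce.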
Full report: [skillshield.io/report/ee15a319e5092f24](https://skillshield.io/report/ee15a319e5092f24)
Powered by SkillShield