Trust Assessment
voice-ai-agents received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding: user-controlled input is used directly as an LLM prompt/greeting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **User-controlled input directly used as LLM prompt/greeting.**<br>The `scripts/agent.js` CLI tool allows users to specify `--prompt` and `--greeting` arguments when creating or updating an AI agent. These values are passed directly to the Voice.ai API as the agent's system prompt and initial greeting. Crafted input could manipulate the behavior of the underlying Large Language Model (LLM), leading to unintended actions, data disclosure, or other security breaches by the agent. The skill does not sanitize or validate these inputs before sending them to the API.<br>**Remediation:** Implement input validation and sanitization for the `--prompt` and `--greeting` arguments to prevent malicious instructions from being passed to the LLM. Consider LLM-specific prompt sanitization techniques or restricting the length/content of user-provided prompts. Alternatively, clearly document the risks of providing untrusted input to these parameters. | LLM | scripts/agent.js:100 |
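The remediation above can be sketched as a validation step in the CLI before the values reach the API. This is an illustrative example, not the skill's actual code: the function name, length limit, and deny-list patterns are assumptions, and a pattern deny-list alone is not a complete defense against prompt injection.

```javascript
// Hypothetical mitigation sketch for the finding above — not part of
// scripts/agent.js. Validates user-supplied --prompt / --greeting values
// before they would be forwarded to the Voice.ai API.

const MAX_PROMPT_LENGTH = 2000; // assumed limit, tune per deployment

// Illustrative (not exhaustive) patterns associated with prompt-injection
// attempts; real deployments should combine this with stricter controls.
const SUSPICIOUS_PATTERNS = [
  /ignore (all |any )?(previous|prior) instructions/i,
  /disregard (the )?system prompt/i,
  /you are now/i,
];

function validatePromptInput(value, fieldName) {
  if (typeof value !== "string" || value.trim().length === 0) {
    throw new Error(`${fieldName} must be a non-empty string`);
  }
  if (value.length > MAX_PROMPT_LENGTH) {
    throw new Error(`${fieldName} exceeds ${MAX_PROMPT_LENGTH} characters`);
  }
  for (const pattern of SUSPICIOUS_PATTERNS) {
    if (pattern.test(value)) {
      throw new Error(`${fieldName} contains a suspicious instruction pattern`);
    }
  }
  // Strip non-printing control characters that could hide instructions.
  return value.replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, "");
}
```

A caller would run `validatePromptInput(args.prompt, "--prompt")` and pass only the returned, cleaned value onward; rejected inputs fail fast with a descriptive error instead of reaching the LLM.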
[View full report](https://skillshield.io/report/774f1e9a1cca6393)