Trust Assessment
voice-ai-tts received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include an arbitrary file write via a user-controlled output path and data exfiltration via a user-controlled file path in voice cloning.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary file write via user-controlled output path.** The `scripts/tts.js` CLI tool and the `voice-ai-tts-sdk.js` methods `generateSpeechToFile` and `streamSpeechToFile` allow a user to specify an arbitrary output file path (the `--output` argument in the CLI, the `outputPath` parameter in the SDK). This path is passed directly to `fs.createWriteStream`, which can overwrite existing files or create new files anywhere on the filesystem where the skill has write permissions. An attacker could exploit this to overwrite critical system or configuration files, or to inject malicious content, leading to denial of service, privilege escalation, or further compromise. *Remediation:* implement strict validation and sanitization of file paths; restrict output paths to a designated, sandboxed directory; disallow absolute paths and paths containing `..`; consider using a temporary file system or requiring explicit user confirmation for file writes outside a designated area. | LLM | scripts/tts.js:30 |
| HIGH | **Data exfiltration via user-controlled file path in voice cloning.** The `voice-ai-tts-sdk.js` method `cloneVoice` accepts a `file` parameter, a path to a local audio sample. This file is read with `fs.createReadStream` and uploaded to the Voice.ai API for voice cloning. If the skill's implementation of the `/clone` chat command (or any other exposed functionality) lets an untrusted user specify this `file` path, an attacker could supply a path to any readable file on the system (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, `~/.aws/credentials`), and its contents would be read and transmitted to the third-party Voice.ai service. *Remediation:* if `cloneVoice` is exposed to user input, strictly validate and sanitize the `file` path; only allow files from a designated, sandboxed upload directory; disallow absolute paths and paths containing `..`; if possible, require explicit user confirmation before uploading arbitrary local files to a third-party service. For the `/clone <audio_url>` command, accept only URLs and download the content to a temporary, isolated location before processing, rather than accepting local file paths. | LLM | voice-ai-tts-sdk.js:390 |