Trust Assessment
walkie-talkie received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include Prompt Injection via User Audio Transcription, Command Injection Risk in `transcribe_voice.sh`, Command Injection Risk in `bin/sherpa-onnx-tts`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User Audio Transcription The skill explicitly states that transcribed text from user-provided audio will be "processed as a normal user prompt." This creates a direct vector for prompt injection, where a malicious user could send an audio message designed to transcribe into instructions that manipulate the host LLM, bypass safety mechanisms, or extract sensitive information. Implement robust input sanitization and validation on the transcribed text before it is fed to the LLM. Consider using a separate, sandboxed LLM instance or a highly restricted prompt template for user-generated content. Clearly define the boundaries of what the LLM can do with this input. | LLM | SKILL.md:12 | |
| HIGH | Command Injection Risk in `transcribe_voice.sh` The skill uses `tools/transcribe_voice.sh` to process user-provided audio files. If the filename or path of the incoming audio is directly passed to this shell script without proper sanitization and escaping, a malicious user could craft a filename containing shell metacharacters (e.g., `;`, `|`, `&`, `$(...)`) to execute arbitrary commands on the host system. Ensure that any user-controlled input (like filenames or paths) passed to `tools/transcribe_voice.sh` is rigorously sanitized and properly escaped. Prefer using `subprocess.run` with `shell=False` and passing arguments as a list, rather than constructing a shell command string. Validate filenames to only allow safe characters. | LLM | SKILL.md:11 | |
| HIGH | Command Injection Risk in `bin/sherpa-onnx-tts` The skill generates speech using `bin/sherpa-onnx-tts` with text that originates from the LLM's response, which itself can be influenced by user input (via prompt injection). If the text passed to the TTS tool's command-line arguments is not properly sanitized and escaped, a malicious user could exploit this by injecting shell metacharacters into their prompt, leading to arbitrary command execution on the host system. The manual execution example `bin/sherpa-onnx-tts /tmp/reply.ogg "Tu mensaje aquí"` highlights this vulnerability if "Tu mensaje aquí" is untrusted. All text generated by the LLM that is subsequently used in shell commands must be thoroughly sanitized and escaped to prevent command injection. Use safe subprocess execution methods (e.g., `subprocess.run` with `shell=False` and passing arguments as a list). | LLM | SKILL.md:16 | |
| MEDIUM | Potential Data Exfiltration via Arbitrary File Sending The skill's workflow involves sending generated `.ogg` files back to the user via a `message` tool with a `filePath` argument. If a successful command injection or prompt injection attack allows the LLM to specify an arbitrary `filePath` outside the intended output directory (e.g., `/etc/passwd`, `/app/secrets.txt`), sensitive files could be exfiltrated to the user disguised as audio messages. Restrict the `message` tool's `filePath` argument to only allow files within a specific, sandboxed output directory. Implement strict validation to prevent path traversal (`../`) or absolute path specifications by the LLM. Ensure the LLM cannot directly control the `filePath` argument with arbitrary values. | LLM | SKILL.md:27 |
Scan History
Embed Code
[](https://skillshield.io/report/4aeb09a00f1ea8f5)
Powered by SkillShield