Trust Assessment
walkie-talkie received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings: Direct Prompt Injection via Transcribed Audio, Command Injection via TTS Output, and Command Injection via the `transcribe_voice.sh` script.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct Prompt Injection via Transcribed Audio.** The skill explicitly instructs the host to "Process the text as a normal user prompt" after transcribing user audio. A malicious user can therefore send audio containing instructions that, once transcribed, directly manipulate the host LLM's behavior, enabling prompt injection attacks such as data exfiltration, unauthorized actions, or overriding system instructions. *Remediation:* strictly sanitize and validate transcribed text before using it as a prompt; consider a dedicated, sandboxed LLM call for user-generated content, or a system prompt that user input cannot override; keep a clear separation between user input and system instructions. | LLM | SKILL.md:13 |
| HIGH | **Command Injection via TTS Output.** The skill instructs the LLM to generate speech using `bin/sherpa-onnx-tts`, with the example `bin/sherpa-onnx-tts /tmp/reply.ogg "Your message here"`. If the LLM-generated text (which is user-controlled via the prompt) is embedded directly into a shell command without escaping, shell metacharacters in a malicious response could execute arbitrary commands on the host. *Remediation:* rigorously escape or sanitize any LLM-generated text passed to a shell; prefer an API that handles arguments safely (e.g. `subprocess.run` with `shell=False` and an argument list) over constructing a raw shell string. | LLM | SKILL.md:28 |
| HIGH | **Command Injection via `transcribe_voice.sh`.** The skill processes user-provided audio with `tools/transcribe_voice.sh`. If the script uses the user-derived audio file path in internal shell commands without quoting or escaping, a crafted filename such as `audio.ogg; rm -rf /` could execute arbitrary commands. *Remediation:* review the script to ensure all inputs, especially file paths derived from user input, are properly quoted and escaped; ideally avoid embedding user-controlled strings in shell commands at all, and add robust filename validation. | LLM | SKILL.md:12 |
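The three remediations above share one pattern: treat user-derived strings (transcripts, filenames, LLM replies) as data, never as shell syntax or privileged instructions. The following is a minimal Python sketch of that pattern; the binary and script paths come from the report, but the helper names and the message-wrapping scheme are illustrative assumptions, not the skill's actual implementation.

```python
import shlex

def build_transcribe_command(audio_path: str) -> list[str]:
    # Pass the user-derived path as a single argv element. With
    # subprocess.run(cmd, shell=False) no shell ever parses it, so
    # metacharacters like ';' or '$( )' stay inert.
    return ["tools/transcribe_voice.sh", audio_path]

def build_tts_command(output_path: str, text: str) -> list[str]:
    # Same pattern for LLM-generated text handed to the TTS binary.
    return ["bin/sherpa-onnx-tts", output_path, text]

def wrap_untrusted_transcript(transcript: str) -> dict:
    # Keep transcribed audio in the user role rather than appending it
    # to system instructions, so injected directives in the audio are
    # ordinary user content, not privileged prompt text.
    return {"role": "user", "content": transcript}

# If a raw shell string is truly unavoidable, quote each piece:
malicious = "audio.ogg; rm -rf /"
quoted = shlex.quote(malicious)

cmd = build_transcribe_command(malicious)
print(cmd[1] == malicious)  # the filename is one inert argument
```

To execute, one would run e.g. `subprocess.run(build_tts_command("/tmp/reply.ogg", reply), check=True)`; the argv-list form sidesteps escaping entirely, which is why it is preferable to `shlex.quote` plus string concatenation.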
Embed Code
[SkillShield Report](https://skillshield.io/report/daa46015f278ebf0)
Powered by SkillShield