Trust Assessment
homepod-tts received a trust score of 65/100, placing it in the Caution category. The skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include Shell Command Injection via User Input, Prompt Injection in TTS Model Input, and Credential Exposure via Command Injection.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100; all four findings below were raised by that layer.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Shell Command Injection via User Input.** The `scripts/play-tts.sh` script interpolates its first command-line argument (`$1`, assigned to `TEXT`) into a `python3` command without sanitization or quoting. An attacker can inject arbitrary shell commands by crafting `TEXT` with shell metacharacters (e.g., `;`, `&`, `|`, `$()`), allowing arbitrary code execution on the host. Remediation: quote user-provided input before it reaches the shell (e.g., with `printf %q`), or better, pass the text to the Python script as a plain argument vector and let the script handle its own argument parsing, avoiding shell interpretation entirely (see the first sketch after the table). | LLM | scripts/play-tts.sh:56 |
| HIGH | **Prompt Injection in TTS Model Input.** The `tts/tts_sample.py` script passes user-provided text (`args.text`) and a derived instruction (`instruct`) directly to `Qwen3TTSModel.generate_voice_clone`. A TTS model cannot execute arbitrary code, but crafted input can steer it into generating unintended or harmful audio (offensive speech, misinformation, or tones and styles the user never requested). The `instruct` parameter is especially exposed because it is derived from user text and directly controls generation style. Remediation: validate and sanitize `args.text` and `instruct` before they reach the model, apply content moderation filters where feasible, and restrict `instruct` to a predefined set of safe options rather than deriving it from arbitrary input (see the second sketch after the table). | LLM | tts/tts_sample.py:100 |
| HIGH | **Credential Exposure via Command Injection.** The `HASS_TOKEN` (Home Assistant access token) is loaded from environment variables or a `.env` file and used in `curl` commands. Through the shell command injection in `scripts/play-tts.sh` (SS-CMD-001), an attacker could exfiltrate the token to an external server or write it somewhere attacker-readable; the token grants access to the Home Assistant instance. Remediation: fix the underlying injection (SS-CMD-001), prefer short-lived tokens or a secrets management system, keep the token out of shell command lines and logs, and ensure it is not exposed in error messages (see the third sketch after the table). | LLM | scripts/play-tts.sh:31 |
| MEDIUM | **Untrusted Model Download from External Source.** The `tts/tts_sample.py` script downloads a pre-trained model (`Qwen/Qwen3-TTS-12Hz-0___6B-Base`) from an external source via `Qwen3TTSModel.from_pretrained`. This is common practice but a supply-chain risk: if the source repository or the model itself is compromised, loading or inference could execute malicious code or produce unintended outputs, and nothing verifies the integrity or authenticity of the download. Remediation: verify downloaded models against known-good cryptographic hashes (see the fourth sketch after the table), host models in a trusted, controlled environment or a private model registry, and regularly audit the upstream source. | LLM | tts/tts_sample.py:90 |
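The following sketches illustrate the remediations above; none of them comes from the homepod-tts codebase itself. First, the command-injection fix: a minimal Python wrapper (hypothetical; the report only identifies `scripts/play-tts.sh:56`) that invokes the TTS entry point with an argument vector instead of a shell string, so metacharacters in the text are inert data. The `--text` flag is an assumption about how `tts/tts_sample.py` parses its arguments.

```python
import subprocess
import sys

def play_tts(text: str) -> None:
    """Run the TTS script via an argument vector (no shell involved).

    Metacharacters in `text` (';', '|', '$(...)') arrive as literal
    data because nothing here is interpreted by a shell.
    """
    subprocess.run(
        ["python3", "tts/tts_sample.py", "--text", text],  # assumed flag
        check=True,
    )

if __name__ == "__main__":
    play_tts(sys.argv[1])
```

Because `subprocess.run` receives a list and `shell=True` is never set, there is no shell layer for an attacker to address, which is why the report's remediation prefers this over quoting fixes like `printf %q`.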
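Second, the prompt-injection mitigation: a sketch of restricting `instruct` to a predefined allowlist and bounding the text length, as the remediation suggests. The style names and the length cap are invented for illustration; only the parameter names `text` and `instruct` come from the report.

```python
# Hypothetical allowlist; the report does not enumerate safe styles.
ALLOWED_INSTRUCTS = {
    "neutral": "Speak in a calm, neutral tone.",
    "cheerful": "Speak in a light, cheerful tone.",
    "slow": "Speak slowly and clearly.",
}
MAX_TEXT_LEN = 500  # assumed cap for a smart-speaker announcement

def sanitize_tts_inputs(text: str, style: str) -> tuple[str, str]:
    """Map a user-chosen style key to a fixed instruction string and
    bound the text length, instead of deriving `instruct` from raw input."""
    if style not in ALLOWED_INSTRUCTS:
        raise ValueError(f"unknown style {style!r}; allowed: {sorted(ALLOWED_INSTRUCTS)}")
    if len(text) > MAX_TEXT_LEN:
        raise ValueError(f"text exceeds {MAX_TEXT_LEN} characters")
    return text, ALLOWED_INSTRUCTS[style]
```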
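Third, the credential-handling fix: a sketch that calls the Home Assistant REST API from Python with `HASS_TOKEN` sent as a bearer header, so the token never appears on a shell command line or in a process listing. The `media_player.play_media` service call and the entity id are assumptions about how the skill addresses the HomePod.

```python
import os
import requests

def announce(base_url: str, media_url: str) -> None:
    """POST a play_media service call with the token in a header,
    never interpolated into a shell command line."""
    token = os.environ["HASS_TOKEN"]  # or load via python-dotenv
    resp = requests.post(
        f"{base_url}/api/services/media_player/play_media",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "entity_id": "media_player.homepod",  # assumed entity id
            "media_content_id": media_url,
            "media_content_type": "music",
        },
        timeout=10,
    )
    resp.raise_for_status()
```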
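Fourth, the model-integrity check: a sketch that pins downloaded weight files to known-good SHA-256 digests before `from_pretrained` runs. The file names and digest manifest are placeholders; the report does not prescribe a specific verification mechanism.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest recorded when the model was first vetted;
# the digests below are placeholders, not real values.
EXPECTED_SHA256 = {
    "model.safetensors": "<pinned-digest>",
    "config.json": "<pinned-digest>",
}

def verify_model_dir(model_dir: Path) -> None:
    """Refuse to load the model unless every pinned file matches its digest."""
    for name, expected in EXPECTED_SHA256.items():
        digest = hashlib.sha256((model_dir / name).read_bytes()).hexdigest()
        if digest != expected:
            raise RuntimeError(f"SHA-256 mismatch for {name}: got {digest}")
```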