Trust Assessment
voice-wake-say received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is a potential command injection via `say` arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via `say` arguments.** The skill instructs the use of `say -v "$SAY_VOICE"` and `say -r "$SAY_RATE"` for optional controls. If the variables `$SAY_VOICE` or `$SAY_RATE` are populated from untrusted user input without proper sanitization, an attacker could inject arbitrary shell commands. By including shell metacharacters (e.g., `;`, `&`, `|`, `$(...)`) within these variables, an attacker could execute arbitrary commands on the host system. While `$SPOKEN_TEXT` is safely handled by `printf '%s'`, the direct use of other variables as arguments to `say` in a shell context is vulnerable. Ensure that `$SAY_VOICE` and `$SAY_RATE` are either hardcoded, derived from a trusted allowlist, or strictly sanitized to reject shell metacharacters if they can be influenced by untrusted input. A safer approach is to use a programming language's safe subprocess execution functions (e.g., `subprocess.run` with `shell=False` in Python, passing arguments as a list) instead of constructing shell command strings directly. | LLM | SKILL.md:42 |
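The remediation in the finding above can be sketched in Python. This is a minimal illustration, not the skill's actual code: the voice allowlist and the rate bounds are hypothetical, and the key point is that arguments are passed to `subprocess.run` as a list (no shell), so metacharacters in user input are delivered to `say` as literal data rather than interpreted by a shell.

```python
import re
import subprocess

# Hypothetical allowlist; the real skill's supported voices are not specified here.
ALLOWED_VOICES = {"Alex", "Samantha", "Daniel"}

def build_say_command(text, voice=None, rate=None):
    """Build an argv list for macOS `say`, validating optional controls.

    Because the command is an argument list (shell=False), metacharacters
    in `text` such as `;` or `$(...)` cannot trigger command injection.
    """
    argv = ["say"]
    if voice is not None:
        if voice not in ALLOWED_VOICES:
            raise ValueError(f"voice not in allowlist: {voice!r}")
        argv += ["-v", voice]
    if rate is not None:
        # Accept digits only (words per minute); reject anything else.
        if not re.fullmatch(r"\d{1,3}", str(rate)):
            raise ValueError(f"rate must be a small integer: {rate!r}")
        argv += ["-r", str(rate)]
    argv.append(text)
    return argv

# subprocess.run(build_say_command("hello"), check=True)  # macOS only
```

Even hostile text like `"hi; rm -rf /"` is safe here, since it ends up as a single literal argument to `say`, while a malicious `voice` or `rate` value is rejected before any process is spawned.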
Powered by SkillShield