Trust Assessment
parakeet-mlx received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include an unpinned `parakeet-mlx` dependency and potential command injection via `parakeet-mlx` arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned `parakeet-mlx` dependency.** The skill's manifest installs `parakeet-mlx` via `uv tool install parakeet-mlx` without a version constraint, so future installations or updates could pull a new, potentially malicious, version of the package without explicit review, introducing a supply chain risk. A compromised upstream package could lead to arbitrary code execution or data exfiltration. **Recommendation:** Pin the dependency to a specific, known-good version (e.g., `uv tool install parakeet-mlx==X.Y.Z`) in the skill's manifest, and review and update the pinned version regularly to incorporate security fixes. | LLM | manifest |
| HIGH | **Potential command injection via `parakeet-mlx` arguments.** The skill's primary function executes the `parakeet-mlx` command-line tool with arguments that can include user-provided file paths and shell wildcards (e.g., `*.mp3`), and the skill's documentation demonstrates such usage. If the LLM constructs these commands by directly interpolating untrusted user input without sanitization or escaping, a malicious user could inject arbitrary shell commands or arguments, potentially compromising the host system. **Recommendation:** Sanitize and escape all user-controlled arguments when constructing `parakeet-mlx` commands. Where possible, use safe execution methods (e.g., `subprocess.run` with `shell=False`, passing arguments as a list) or implement robust input validation for all user-controlled command arguments. | LLM | SKILL.md:8 |
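The command-injection recommendation above is easiest to see in code. Below is a minimal sketch of the mitigation, assuming a Python wrapper around the CLI and that `parakeet-mlx` accepts an audio file path as a positional argument (as in the documented `*.mp3` usage). The function name, the allowed-suffix check, and the version placeholder are illustrative and not part of the skill itself.

```python
import subprocess
from pathlib import Path

# Per the first finding, the manifest's install step should pin a version,
# e.g. `uv tool install parakeet-mlx==X.Y.Z` (X.Y.Z is a placeholder,
# not a verified release).

ALLOWED_SUFFIXES = {".mp3", ".wav", ".flac", ".m4a"}  # illustrative whitelist

def transcribe(audio_path: str) -> str:
    """Run parakeet-mlx on one user-supplied file without invoking a shell."""
    path = Path(audio_path).expanduser().resolve()

    # Validate the untrusted path instead of handing it to a shell:
    # reject missing files and unexpected extensions.
    if not path.is_file() or path.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"refusing to transcribe: {audio_path!r}")

    # shell=False (the default) plus an argument list means wildcards,
    # quotes, and ';' in the filename are passed through as literal
    # characters and never interpreted by a shell.
    result = subprocess.run(
        ["parakeet-mlx", str(path)],
        shell=False,
        check=True,
        capture_output=True,
        text=True,
    )
    return result.stdout
```

Because the argument list bypasses shell parsing, a filename like `song.mp3; rm -rf ~` is handed to `parakeet-mlx` verbatim and simply fails to open, rather than executing the injected command.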