Trust Assessment
voice-transcribe received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings: a potential command injection via a user-controlled audio file path (high) and an OpenAI API key stored in a local .env file (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via user-controlled audio file path.** The skill instructs the user to execute a local `transcribe` script with a user-controlled audio file path (`<audio-file>`). If the `transcribe` script (part of this skill package) does not sanitize or escape this input before using it in a shell command (e.g., via `subprocess.run(..., shell=True)` or `os.system()`), it is vulnerable to command injection: an attacker could craft a file path containing shell metacharacters to execute arbitrary commands on the host system. Review the `transcribe` script to ensure all user-provided arguments are sanitized before use in shell commands, and prefer `subprocess.run()` with `shell=False`, passing arguments as a list, to prevent shell injection. | LLM | SKILL.md:10 |
| MEDIUM | **OpenAI API key stored in local .env file.** The skill instructs the user to store their `OPENAI_API_KEY` in a `.env` file inside the skill's directory, making the key readable by any code the skill executes. While common for local development, this poses a risk if the skill's code is malicious or compromised, since the credential could be read and exfiltrated. Consider more secure alternatives: environment variables managed by the host system, a secrets management service, or prompting the user for the key at runtime. If a `.env` file must be used, vet the skill's code thoroughly and restrict its network access to only the necessary endpoints. | LLM | SKILL.md:27 |
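The high-severity finding recommends passing arguments as a list with `shell=False`. The `transcribe` script itself is not shown in this report, so the following is only a sketch of that recommended pattern; the `build_transcribe_command` and `run_transcribe` names are illustrative assumptions, not part of the skill.

```python
import subprocess


def build_transcribe_command(audio_path: str) -> list[str]:
    """Build the argv list for a hypothetical `transcribe` script.

    Passing arguments as a list (with the default shell=False) hands the
    path to the OS as a single argv entry, so shell metacharacters in a
    malicious filename are never interpreted by a shell.
    """
    return ["transcribe", audio_path]


def run_transcribe(audio_path: str) -> str:
    # Unsafe pattern the finding warns about (do NOT do this):
    #   subprocess.run(f"transcribe {audio_path}", shell=True)
    # A filename like "x; rm -rf ~" would then run as a command.
    result = subprocess.run(
        build_transcribe_command(audio_path),
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError on a non-zero exit
    )
    return result.stdout
```

With this pattern, a path such as `x; whoami.m4a` is delivered verbatim to the `transcribe` executable as its first argument rather than being split and executed by a shell.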
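For the medium-severity finding, one of the suggested alternatives is an environment variable managed by the host system rather than a `.env` file in the skill directory. A minimal sketch of that approach, assuming the variable name `OPENAI_API_KEY` from the finding (the helper function itself is hypothetical):

```python
import os


def load_openai_api_key() -> str:
    """Read OPENAI_API_KEY from the host environment instead of a .env file.

    An exported environment variable keeps the credential out of the
    skill directory, where any bundled code could read a .env file
    and exfiltrate its contents.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it in your shell rather "
            "than storing it in the skill's .env file."
        )
    return key
```

The user would run `export OPENAI_API_KEY=...` in their shell (or set it in their shell profile) before invoking the skill, so the key never needs to be written into the skill's directory.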
Embed Code
[SkillShield report for voice-transcribe](https://skillshield.io/report/a391c6d21f92c472)
Powered by SkillShield