Trust Assessment
transcribee received a trust score of 83/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include a potential command injection via `transcribee` arguments and exposure of a potential API key location.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via `transcribee` arguments.** The skill's documentation instructs the LLM to construct and execute `transcribee` commands using user-provided URLs or file paths. If the LLM does not sanitize or escape user input before passing it as an argument to `transcribee`, a malicious user could inject arbitrary shell commands. The documentation's advice to "Always quote URLs containing `&` or special characters" highlights the sensitivity of the underlying command to shell metacharacters, increasing the risk if the LLM fails to handle input securely. Mitigation: explicitly instruct the LLM to sanitize and escape all user-provided arguments, implement robust input validation, and use safe command execution methods (e.g., `subprocess.run` with `shell=False` and arguments passed as a list) in the underlying `transcribee` tool. | LLM | SKILL.md:11 |
| MEDIUM | **Exposure of potential API key location.** The skill's troubleshooting section explicitly mentions checking the `.env` file in the `transcribee` directory for API errors, indicating that sensitive API keys or credentials may be stored there. Because this path is now known to the LLM through the skill's documentation, a malicious prompt could instruct the LLM to read and exfiltrate the contents of the `.env` file, enabling credential harvesting. Mitigation: avoid naming specific file paths for sensitive credentials in public documentation; refer instead to general configuration practices or secure credential management. If an `.env` file is necessary, ensure the LLM cannot access it or is strictly forbidden from reading its contents. | LLM | SKILL.md:44 |
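The mitigations in both findings can be illustrated with a short Python sketch. Note the assumptions: the actual `transcribee` CLI interface is not specified here, so the invocation is hypothetical, and the `TRANSCRIBEE_API_KEY` environment variable name is invented for illustration.

```python
import os
import subprocess

def get_api_key() -> str:
    # Read the credential from the process environment (hypothetical
    # TRANSCRIBEE_API_KEY name) instead of documenting a concrete
    # .env path that a prompt could ask the LLM to read and leak.
    key = os.environ.get("TRANSCRIBEE_API_KEY")
    if key is None:
        raise RuntimeError("TRANSCRIBEE_API_KEY is not set")
    return key

def run_cli(argv: list[str]) -> str:
    # Run an external command WITHOUT a shell: argv is a list, so shell
    # metacharacters (&, ;, |, $(...)) inside any element are passed to
    # the program as literal data, never interpreted by a shell.
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout

# Hypothetical invocation: a URL containing '&' needs no manual quoting
# because no shell ever parses it.
# transcript = run_cli(["transcribee", "https://example.com/watch?v=abc&t=10"])
```

With this pattern, the documentation's "always quote URLs containing `&`" advice becomes unnecessary: quoting only matters when a shell parses the command line, and `shell=False` (the `subprocess.run` default) removes the shell from the path entirely.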
Embed Code
[SkillShield report](https://skillshield.io/report/7d14e4990daec6ac)
Powered by SkillShield