Trust Assessment
assemblyai-transcribe received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 4 high, 0 medium, and 0 low severity. Key findings include "Sensitive path access: AI agent config", "Potential Command Injection via unescaped user input in shell arguments", and "Arbitrary local file upload and read capability".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: '~/.clawdbot/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/tristanmanchester/assemblyai-transcribe/SKILL.md:19 |
| HIGH | **Potential Command Injection via unescaped user input in shell arguments.** The skill's usage examples in `SKILL.md` demonstrate invoking `assemblyai.mjs` with arguments like file paths, URLs, and JSON strings. If the host LLM directly interpolates user-provided input into these arguments without proper shell escaping, a malicious user could inject arbitrary shell commands. For example, a crafted file path like `'; rm -rf /; #` could break out of the quoted string and execute other commands. The `--config` argument, while parsed as JSON internally, is also vulnerable if its value is not properly escaped for the shell. The host LLM must ensure all user-provided arguments passed to shell commands are properly escaped to prevent command injection. For file paths and URLs, use `shlex.quote` or similar robust shell escaping mechanisms. For JSON strings, ensure they are correctly quoted and escaped within the shell command context. | LLM | SKILL.md:48 |
| HIGH | **Arbitrary local file upload and read capability.** The `assemblyai.mjs` script, as described in `SKILL.md`, allows uploading local files specified by a path (`transcribe "./path/to/audio.mp3"`) and reading local files for configuration (`--config @file`). The `uploadFile` function uses `fs.createReadStream` and `readConfigArg` uses `fsp.readFile` on paths derived from user input. If a malicious user can convince the host LLM to provide a path to a sensitive local file (e.g., `/etc/passwd`, `~/.ssh/id_rsa`), the skill will read and transmit its contents to the AssemblyAI API (or potentially a malicious endpoint if combined with base URL manipulation). The host LLM should implement strict validation and sanitization of file paths provided by users, restricting access to a designated sandbox directory or explicitly whitelisting allowed file types/locations. Avoid passing arbitrary user-controlled paths directly to file system operations. | LLM | assemblyai.mjs:160 |
| HIGH | **API Key exposure through malicious base URL.** The `assemblyai.mjs` script uses the `ASSEMBLYAI_API_KEY` in the `Authorization` header for all requests. The base URL for the AssemblyAI API can be overridden by the `ASSEMBLYAI_BASE_URL` environment variable or the `--base-url` command-line flag. If a malicious user can control either of these inputs, they could redirect API requests, including the `ASSEMBLYAI_API_KEY`, to an arbitrary server they control, leading to credential exfiltration. The host LLM should prevent user-controlled input from directly setting or influencing sensitive environment variables like `ASSEMBLYAI_BASE_URL` or command-line flags like `--base-url`. If customization of the base URL is necessary, validate the URL against a strict whitelist of trusted endpoints. | LLM | assemblyai.mjs:130 |
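The command-injection finding above hinges on the host interpolating user input into a shell string. In Node, the simplest robust fix is to avoid the shell entirely: `execFile`/`execFileSync` pass arguments directly to the child process as an argv array, so metacharacters are never interpreted. A minimal sketch (the hostile "file path" below is the example from the finding; nothing about the skill's internals is assumed):

```javascript
import { execFileSync } from "node:child_process";

// The crafted path from the finding: inside a shell string this would
// terminate the quote and run `rm -rf /`.
const hostile = "'; rm -rf /; #";

// execFileSync spawns the child with no shell, so the hostile string
// arrives as one literal argument and is never parsed as a command.
const out = execFileSync(
  "node",
  ["-e", "console.log(process.argv[1])", hostile]
).toString().trim();

console.log(out); // the hostile string, verbatim; nothing was executed
```

The same pattern applies to invoking `assemblyai.mjs` itself: build an argv array (`["assemblyai.mjs", "transcribe", filePath, "--config", configJson]`) rather than a concatenated command string, and no escaping is needed at all.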
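For the arbitrary-file-read finding, the suggested mitigation is to confine reads to a designated sandbox directory. One way to do that, sketched here with a hypothetical `resolveInSandbox` helper (not part of the skill), is to resolve the user-supplied path against the sandbox root and reject anything that escapes it:

```javascript
import path from "node:path";

// Hypothetical guard: resolve userPath relative to sandboxDir and
// refuse any result (via "..", absolute paths, etc.) outside the root.
function resolveInSandbox(sandboxDir, userPath) {
  const root = path.resolve(sandboxDir) + path.sep;
  const resolved = path.resolve(sandboxDir, userPath);
  if (!resolved.startsWith(root)) {
    throw new Error(`Path escapes sandbox: ${userPath}`);
  }
  return resolved;
}

console.log(resolveInSandbox("/tmp/uploads", "audio.mp3")); // allowed
try {
  resolveInSandbox("/tmp/uploads", "../../etc/passwd"); // traversal: rejected
} catch (e) {
  console.log(e.message);
}
```

Note that a prefix check on resolved paths does not defeat symlinks inside the sandbox; a hardened version would also call `fs.realpathSync` before the comparison.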
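The base-URL finding recommends validating any override against a strict whitelist of trusted endpoints. A minimal sketch of such a check, using the WHATWG `URL` parser; the allowed hostnames below are illustrative assumptions, not taken from the skill:

```javascript
// Illustrative allowlist for ASSEMBLYAI_BASE_URL / --base-url overrides.
const ALLOWED_HOSTS = new Set(["api.assemblyai.com"]);

function validateBaseUrl(raw) {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== "https:" || !ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`Untrusted base URL: ${raw}`);
  }
  return url.toString();
}

console.log(validateBaseUrl("https://api.assemblyai.com/v2"));
```

Matching on `url.hostname` (rather than substring-searching the raw string) avoids bypasses like `https://api.assemblyai.com.attacker.example`, where the trusted name appears only as a subdomain label.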
Full report: https://skillshield.io/report/fd7c490b6d6eb3e6
Powered by SkillShield