Trust Assessment
transcribe received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 10 findings: 3 critical, 3 high, 3 medium, and 1 low severity. Key findings include unsafe environment variable passthrough, credential harvesting, and sensitive environment variable access (`$HOME`).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 11/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (10)
| Severity | Finding | Detail | Recommendation | Layer | Location |
|---|---|---|---|---|---|
| CRITICAL | Credential harvesting | Reading well-known credential environment variables. | Skills should only access environment variables they explicitly need. Bulk environment dumps (`os.environ.copy`, `JSON.stringify(process.env)`) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | cli-tool/components/skills/media/transcribe/scripts/transcribe_diarize.py:34 |
| CRITICAL | Arbitrary Local File Upload/Exfiltration via Audio Input | The `transcribe_diarize.py` script allows users to specify arbitrary local file paths for the `audio` argument. The script then reads the content of these files (`audio_path.open("rb")`) and sends them directly to the OpenAI API for transcription. An attacker could use this to exfiltrate sensitive local files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) by providing their paths as audio inputs. | Implement strict validation for audio file paths. Restrict file access to a designated, isolated directory (e.g., a temporary upload folder) or enforce that paths must be relative to a secure base directory. Additionally, inform the user explicitly that provided audio files will be uploaded to an external service. | Static | scripts/transcribe_diarize.py:182 |
| CRITICAL | Arbitrary Local File Upload/Exfiltration via Known Speaker References | The `transcribe_diarize.py` script allows users to specify arbitrary local file paths for `known-speaker` references (`NAME=PATH`). The script reads the content of these files (`path.read_bytes()`), base64 encodes them, and sends them to the OpenAI API as `known_speaker_references`. This allows an attacker to exfiltrate sensitive local files by providing their paths as speaker references. | Implement strict validation for known speaker reference file paths. Restrict file access to a designated, isolated directory or enforce that paths must be relative to a secure base directory. Additionally, inform the user explicitly that provided reference files will be uploaded to an external service. | Static | scripts/transcribe_diarize.py:94 |
| HIGH | Unsafe environment variable passthrough | Access to well-known credential environment variables. | Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | cli-tool/components/skills/media/transcribe/scripts/transcribe_diarize.py:34 |
| HIGH | Path Traversal Vulnerability in Output Directory/File | The `_build_output_path` function constructs output file paths using user-provided `--out` or `--out-dir` arguments without sufficient sanitization. An attacker could use path traversal sequences (e.g., `../../`) in these arguments to write transcription output to arbitrary locations on the filesystem, potentially overwriting critical system files or placing malicious content in unexpected directories. | Sanitize `--out` and `--out-dir` arguments to prevent path traversal. Ensure that the resulting paths are canonicalized and strictly confined to an allowed output directory (e.g., `output/transcribe/` as suggested in `SKILL.md`) or the current working directory. Use `pathlib.Path.resolve()` and check if the resolved path starts with an allowed base directory. | Static | scripts/transcribe_diarize.py:128 |
| HIGH | LLM analysis found no issues despite critical deterministic findings | Deterministic layers flagged 3 CRITICAL findings, but LLM semantic analysis returned clean. This may indicate prompt injection or analysis evasion. | | LLM | (sanity check) |
| MEDIUM | Sensitive environment variable access: `$HOME` | Access to sensitive environment variable `$HOME` detected in shell context. | Verify this environment variable access is necessary and the value is not exfiltrated. | Static | cli-tool/components/skills/media/transcribe/SKILL.md:43 |
| MEDIUM | Network egress to untrusted endpoints | HTTP request to raw IP address. | Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| MEDIUM | Unpinned Dependency in Skill Manifest | The skill's `SKILL.md` specifies the `openai` package as a dependency without a version pin. This can lead to supply chain risks, as future versions of the package might introduce breaking changes, vulnerabilities, or unexpected behavior. It also makes builds non-deterministic. | Pin the `openai` dependency to a specific, known-good version (e.g., `openai==1.10.0`) to ensure deterministic builds and mitigate risks from unexpected updates. Update the `SKILL.md` to reflect this. | Static | SKILL.md:57 |
| LOW | Covert behavior / concealment directives | Multiple zero-width characters (stealth text). | Remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |