Trust Assessment
youtube-transcript received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, and 1 medium (no low-severity findings). The key findings are "Prompt Injection Attempt: Directives to LLM from Untrusted Source", "Prompt Injection & Potential Data Exfiltration/Arbitrary File Write", and "Prompt Injection: Behavioral Directives to LLM from Untrusted Source".
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. The llm_behavioral_safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 11, 2026 (commit 326f2466). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection Attempt: Directives to LLM from Untrusted Source.** The `SKILL.md` file, which is treated as untrusted input, contains direct instructions intended for the host LLM. Specifically, the line 'CRITICAL: YOU MUST NEVER MODIFY THE RETURNED TRANSCRIPT' attempts to override the LLM's behavior and impose constraints from an untrusted source, a classic prompt injection technique designed to manipulate the LLM's operational guidelines. *Remediation:* remove all directives and instructions intended for the host LLM from the untrusted `SKILL.md` content; LLM instructions should originate only from trusted system prompts or configuration. | Unknown | SKILL.md:23 |
| HIGH | **Prompt Injection & Potential Data Exfiltration/Arbitrary File Write.** The `SKILL.md` file, treated as untrusted input, instructs the LLM to 'save the transcript to a specific file' if requested, using a 'requested file' path. This is a prompt injection attempt to make the LLM perform a file write based on potentially malicious user input. If the LLM has file system write capabilities, this could lead to arbitrary file writes, data exfiltration (writing sensitive transcripts to attacker-controlled locations), or overwriting critical system files. *Remediation:* remove all file-operation directives from untrusted content. If file saving is a required feature, it must be implemented as a controlled tool with strict path sanitization and access controls, not as a direct LLM instruction from untrusted input. The LLM should never be instructed to write to arbitrary user-supplied paths. | Unknown | SKILL.md:25 |
| MEDIUM | **Prompt Injection: Behavioral Directives to LLM from Untrusted Source.** The `SKILL.md` file, which is treated as untrusted input, contains instructions for the host LLM on how to process and format the transcript ('If the transcript is without timestamps, you SHOULD clean it up...'). These attempt to influence the LLM's output generation and formatting behavior from an untrusted source, which constitutes prompt injection. *Remediation:* remove all directives and instructions intended for the host LLM from the untrusted `SKILL.md` content; LLM instructions should originate only from trusted system prompts or configuration. | Unknown | SKILL.md:24 |
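The HIGH finding's remediation calls for file saving to be implemented as a controlled tool with strict path sanitization, rather than letting untrusted `SKILL.md` text direct the LLM to write to a "requested file" path. A minimal sketch of what such a tool might look like (the `resolve_transcript_path` and `save_transcript` helpers are hypothetical illustrations, not part of the skill):

```python
from pathlib import Path


def resolve_transcript_path(base_dir: Path, requested_name: str) -> Path:
    """Map an untrusted file name to a safe path inside base_dir.

    Raises ValueError on traversal attempts, hidden files, or empty names.
    """
    # Keep only the final path component, discarding any directory parts
    # an attacker might smuggle in ("../../etc/passwd" -> "passwd").
    name = Path(requested_name).name
    if not name or name.startswith("."):
        raise ValueError(f"invalid file name: {requested_name!r}")
    target = (base_dir / name).resolve()
    # Defense in depth: confirm the resolved path is still inside base_dir.
    if base_dir.resolve() not in target.parents:
        raise ValueError(f"path escapes {base_dir}: {requested_name!r}")
    return target


def save_transcript(transcript: str, base_dir: Path, requested_name: str) -> Path:
    """Write a transcript only inside base_dir, never at an arbitrary path."""
    base_dir.mkdir(parents=True, exist_ok=True)
    target = resolve_transcript_path(base_dir, requested_name)
    target.write_text(transcript, encoding="utf-8")
    return target
```

The design point is that the write destination is computed by trusted code from a fixed base directory; the untrusted input contributes at most a sanitized file name, never a directory.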
Full report: [skillshield.io/report/fcb363cc7a1994db](https://skillshield.io/report/fcb363cc7a1994db)