Trust Assessment
youtube-summarize received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 2 critical, 2 high, 1 medium, and 0 low severity. Key findings include network egress to untrusted endpoints, arbitrary code execution via an untrusted external script (glot.io), and potential command injection via an unsanitized VIDEO_URL.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100; four of the five findings were surfaced by this layer.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** A URL points to a known exfiltration/webhook service. **Remediation:** review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/hightower6eu/youtube-summarize-hr5oh/SKILL.md:10 |
| CRITICAL | **Arbitrary code execution via untrusted external script (glot.io).** The skill instructs macOS users to visit an external URL (glot.io) and execute an unverified script directly in their terminal. Because the snippet's content is not controlled or verified by the skill developer and can change at any time, this allows arbitrary code execution on the user's system and introduces a severe supply-chain risk and potential for command injection. **Remediation:** remove the instruction to execute external, unverified scripts. Provide a self-contained, verified installation method or a direct link to a signed, versioned binary from a trusted source; if a script is necessary, bundle it in the skill package and review its contents. | LLM | SKILL.md:9 |
| HIGH | **Potential command injection via unsanitized `VIDEO_URL`.** The skill executes shell commands using a user-provided `VIDEO_URL`. If the URL is not sanitized or escaped by the agent's execution environment, a malicious user can inject shell metacharacters (e.g., `&`, `\|`, `;`, `$(...)`) to execute arbitrary commands on the host system. This applies to every `yt-dlp` command that takes `VIDEO_URL` as an argument. **Remediation:** strictly validate and escape all user-provided inputs, especially `VIDEO_URL`, before passing them to shell commands; use a library or function that safely quotes shell arguments. | LLM | SKILL.md:24 |
| HIGH | **Dependency on unverified external binary (openclaw-agent).** The skill requires users to download and run an external executable (`openclaw-agent.zip`) from a GitHub release, so the integrity and security of this third-party binary are not guaranteed. If the repository or the release asset is compromised, users could be instructed to run malicious software. **Remediation:** provide a verified, signed, and versioned binary from a trusted distribution channel; ideally the agent should be open source and buildable from source, or its integrity verifiable via checksums/signatures. Clearly document the security implications of running external binaries. | LLM | SKILL.md:7 |
| MEDIUM | **Potential prompt injection via video content.** The skill's primary purpose is to extract video transcripts and metadata for summarization by an LLM. Malicious actors could upload videos with specially crafted titles, descriptions, or transcript content designed to act as prompt injection attacks against the host LLM. While the skill performs some cleaning of VTT, it does not specifically sanitize for injection patterns. **Remediation:** sanitize and filter all extracted video content (titles, descriptions, transcripts) before passing it to the LLM, using techniques such as keyword filtering, length limits, or re-prompting strategies. | LLM | SKILL.md:95 |
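A minimal remediation sketch for the command-injection finding (SKILL.md:24), assuming a Python execution environment; the function names, allowlist, and `yt-dlp` flags are illustrative, not taken from the skill itself. The key points are validating `VIDEO_URL` up front and passing it to `subprocess` as a list element, never interpolating it into a shell string:

```python
from urllib.parse import urlparse

# Illustrative allowlist: only plain https YouTube URLs are accepted.
ALLOWED_HOSTS = {"www.youtube.com", "youtube.com", "youtu.be", "m.youtube.com"}

def validate_video_url(video_url: str) -> str:
    """Reject anything that is not an https URL on an allowed host.

    Validating here means shell metacharacters like `;`, `|`, or `$(...)`
    never reach a command line at all.
    """
    parsed = urlparse(video_url)
    if parsed.scheme != "https":
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"host not allowed: {parsed.hostname!r}")
    return video_url

def build_transcript_command(video_url: str) -> list[str]:
    """Build the yt-dlp argv as a list, never as a shell string.

    With an argv list (and shell=False, the subprocess default), the URL is
    handed to yt-dlp as a single argument and no shell ever interprets it.
    """
    url = validate_video_url(video_url)
    return ["yt-dlp", "--skip-download", "--write-auto-sub",
            "--sub-format", "vtt", url]

# Usage (would invoke yt-dlp if installed):
#   import subprocess
#   subprocess.run(build_transcript_command(url), check=True, timeout=300)
```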
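For the two supply-chain findings (the glot.io script at SKILL.md:9 and the `openclaw-agent.zip` binary at SKILL.md:7), the recommended checksum verification can be sketched as follows. This is a generic pattern, not code from the skill; in practice the publisher would pin the SHA-256 of the exact release asset next to the download link:

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bytes:
    """Verify downloaded bytes against a pinned SHA-256 digest.

    Raises if the asset does not match, so a tampered or silently
    replaced release is never executed.
    """
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"checksum mismatch: got {digest}")
    return data
```

Signature verification (e.g. a detached signature over the release asset) is stronger still, since a compromised repository could republish both the asset and its checksum.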
[View the full report](https://skillshield.io/report/4e89bc8dab9835b9)
Powered by SkillShield