Trust Assessment
youtube-factory received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 2 critical, 3 high, 0 medium, and 0 low severity. Key findings include User input directly embedded in LLM prompt, User-controlled argument passed to external command without sanitization, LLM-generated content passed to external command without sanitization.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | User input directly embedded in LLM prompt The 'topic' argument, which is directly controlled by the user via command-line input, is inserted without sanitization into the LLM prompt string. An attacker can craft a malicious 'topic' to manipulate the underlying LLM's behavior, potentially leading to unintended content generation, data exposure (if the LLM has access to sensitive context), or other prompt injection attacks. Implement robust input sanitization or use a templating engine that automatically escapes user input before constructing LLM prompts. Consider using a structured prompt approach (e.g., JSON-based prompts) or a dedicated prompt engineering library that separates user input from system instructions. | LLM | youtube_factory.py:150 | |
| CRITICAL | User input directly embedded in LLM prompt The 'topic' argument, which is directly controlled by the user via command-line input, is inserted without sanitization into the LLM prompt string. An attacker can craft a malicious 'topic' to manipulate the underlying LLM's behavior, potentially leading to unintended content generation, data exposure (if the LLM has access to sensitive context), or other prompt injection attacks. Implement robust input sanitization or use a templating engine that automatically escapes user input before constructing LLM prompts. Consider using a structured prompt approach (e.g., JSON-based prompts) or a dedicated prompt engineering library that separates user input from system instructions. | LLM | youtube_factory.py:300 | |
| HIGH | User-controlled argument passed to external command without sanitization The 'voice' argument is directly controlled by the user via the '--voice' command-line option. This argument is then passed to the 'text_to_speech' function (from 'video_utils'), which is expected to invoke the 'edge-tts' command-line tool. If 'text_to_speech' constructs a shell command string using this 'voice' argument without proper escaping or uses 'shell=True' with an unsanitized string, an attacker could inject arbitrary shell commands by providing a malicious 'voice' value (e.g., 'en-US-ChristopherNeural --output-file /tmp/evil.mp3; rm -rf /'). Ensure that all arguments passed to 'subprocess' calls are properly escaped for the target shell, or preferably, pass arguments as a list to 'subprocess.run()' and avoid 'shell=True'. Validate 'voice' against a whitelist of allowed values. | LLM | youtube_factory.py:175 | |
| HIGH | LLM-generated content passed to external command without sanitization The 'scene["text"]' content, which is generated by the LLM based on user input, is passed to the 'text_to_speech' function. If the LLM is manipulated (via prompt injection or other means) to generate text containing shell metacharacters or 'edge-tts' specific command injection syntax, and 'text_to_speech' executes this content via 'subprocess.run(..., shell=True)' or an unsanitized command string, it could lead to arbitrary command execution. Sanitize or escape all LLM-generated text before passing it to external command-line tools. If possible, use libraries that provide safe APIs for interacting with TTS engines instead of direct 'subprocess' calls. | LLM | youtube_factory.py:175 | |
| HIGH | LLM-generated script content passed to FFmpeg without sanitization The 'add_captions' function (from 'video_utils') takes 'script_path' as an argument. This 'script_path' points to a file ('script.md') containing LLM-generated text. It is highly probable that 'add_captions' uses 'ffmpeg' to burn these captions into the video. 'ffmpeg' has a powerful and complex filter graph syntax. If the LLM-generated script text contains 'ffmpeg' filter graph injection payloads or shell metacharacters, and this text is not properly sanitized or escaped before being passed to 'ffmpeg' via 'subprocess', an attacker could achieve command injection. Thoroughly sanitize and escape all LLM-generated text before passing it as arguments or input to 'ffmpeg' commands. Prefer using 'ffmpeg' libraries or carefully constructed argument lists to 'subprocess.run()' over 'shell=True' or direct string concatenation. | LLM | youtube_factory.py:190 |
Scan History
Embed Code
[](https://skillshield.io/report/49f4b700f2860bbd)
Powered by SkillShield