Trust Assessment
video-ad-deconstructor received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Unsanitized user-controlled data interpolated into LLM prompts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Unsanitized user-controlled data interpolated into LLM prompts The skill constructs prompts for the Gemini LLM by directly interpolating potentially untrusted data from `ExtractedVideoContent` objects (e.g., `transcript`, `scenes`, `text_overlays`) and the `summary` generated by the LLM itself. This occurs in `AdDeconstructor.generate_summary` (line 89) where `context` is built from these inputs and used in an f-string, and in `AdDeconstructor.deconstruct` (line 130, truncated) where `extracted_content` and `summary` are passed as variables to `PromptManager.get_prompt`. The `PromptManager._substitute_variables` function (scripts/prompt_manager.py, line 110) performs direct string replacement without sanitization, making it vulnerable to malicious content in the `variables` dictionary. An attacker could craft malicious input within the video content (e.g., in the transcript or text overlays) to manipulate the LLM's behavior, leading to prompt injection. Implement robust input sanitization or use templating libraries that offer safer variable substitution (e.g., by escaping special characters or using structured data for LLM input where possible). For LLM prompts, consider using techniques like input validation, output parsing, and separating user input from system instructions. If direct interpolation is necessary, ensure all user-controlled variables are thoroughly sanitized to remove any potential prompt injection attempts before being included in the prompt. Specifically, sanitize `transcript`, `scenes`, `text_overlays`, and the `summary` before they are used in prompt construction or passed to `PromptManager` for substitution. | LLM | scripts/deconstructor.py:89 |
Scan History
Embed Code
[](https://skillshield.io/report/25d9d8ae1fb7cb81)
Powered by SkillShield