Trust Assessment
fliz-ai-video-generator received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 8 findings: 0 critical, 1 high, 7 medium, and 0 low severity. Key findings include "Suspicious import: requests", "Potential Server-Side Request Forgery (SSRF) via webhook_url", and "Shell Command Injection in `curl_examples.sh` polling logic".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 65/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (8)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Shell Command Injection in `curl_examples.sh` polling logic.** The `poll_video` function in `assets/examples/curl_examples.sh` uses shell command substitution (`$()`) to parse the `step` and `url` fields from the `curl` response. If the Fliz API response (external data) contains malicious characters (e.g., backticks, semicolons, newlines) in the `step` or `url` values, it could lead to arbitrary command execution in the shell where the example script is run: a classic shell injection vulnerability. When parsing external data in shell scripts, avoid direct command substitution on potentially untrusted strings; use safer parsing methods or strictly sanitize the input. For robust parsing, consider a tool like `jq` or a more capable language (such as Python) that handles JSON safely. | LLM | assets/examples/curl_examples.sh:160 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network or low-level system access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/jb-fliz/fliz-ai-video-generator/assets/examples/python_client.py:30 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network or low-level system access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/jb-fliz/fliz-ai-video-generator/scripts/create_video.py:15 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network or low-level system access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/jb-fliz/fliz-ai-video-generator/scripts/list_resources.py:16 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network or low-level system access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/jb-fliz/fliz-ai-video-generator/scripts/poll_status.py:14 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network or low-level system access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/jb-fliz/fliz-ai-video-generator/scripts/test_connection.py:13 |
| MEDIUM | **Potential Server-Side Request Forgery (SSRF) via `webhook_url`.** The `create_video` functions in the Python and Node.js clients, as well as the cURL examples, accept a `webhook_url` parameter. If an AI agent allows untrusted user input to populate this parameter, the Fliz API could be coerced into sending a request to an arbitrary internal or external URL. This could be used to scan internal networks, access sensitive internal services, or trigger actions on external systems. AI agents should strictly validate and sanitize any user-provided `webhook_url` before passing it to the skill; consider allowlisting permitted domains or using a proxy to block requests to internal networks or sensitive external endpoints. | LLM | scripts/create_video.py:49 |
| MEDIUM | **Local File Content Exfiltration via `--file` argument.** The `scripts/create_video.py` script allows the video `description` to be read from a local file specified by the `--file` argument. If an AI agent allows untrusted user input for `--file`, it could be coerced into reading arbitrary local files (e.g., `/etc/passwd`, configuration files, API keys) and sending their content to the Fliz API as part of the video description: a data exfiltration risk. AI agents should strictly validate any user-provided file paths; if file access is necessary, restrict it to a designated safe directory or a virtualized environment, and never accept arbitrary paths from untrusted input. | LLM | scripts/create_video.py:76 |
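The remediation for the HIGH finding suggests replacing shell command substitution with a JSON-aware parser. A minimal Python sketch of that idea (the `step` and `url` field names come from the finding; the exact Fliz response shape is an assumption, and `parse_poll_response` is a hypothetical helper, not part of the skill):

```python
import json


def parse_poll_response(body):
    """Safely extract 'step' and 'url' from a JSON response body.

    json.loads treats the payload purely as data, so shell
    metacharacters (backticks, semicolons, newlines) embedded in
    the values stay inert instead of being executed.
    """
    data = json.loads(body)
    step = str(data.get("step", ""))
    url = data.get("url")
    return step, url


# Even a hostile payload comes back as an ordinary string:
step, url = parse_poll_response('{"step": "done; rm -rf /", "url": null}')
```

Unlike `$(...)` substitution in the shell, nothing here re-interprets the values, so the injection vector described in the finding is closed.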
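For the SSRF finding, the suggested domain-allowlist validation could be sketched as follows (the `ALLOWED_WEBHOOK_HOSTS` set and the function name are hypothetical illustrations, not part of the skill):

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical allowlist; replace with the domains you actually trust.
ALLOWED_WEBHOOK_HOSTS = {"hooks.example.com"}


def is_safe_webhook_url(url):
    """Reject webhook URLs that are not HTTPS, not on the allowlist,
    or that point at a literal loopback/private IP address."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        ip = ipaddress.ip_address(parsed.hostname)
        if ip.is_private or ip.is_loopback:
            return False
    except ValueError:
        pass  # hostname is a DNS name, not a literal IP
    return parsed.hostname in ALLOWED_WEBHOOK_HOSTS
```

Note that a name-based allowlist alone does not stop DNS rebinding; for a stronger guarantee, route outbound webhook traffic through an egress proxy, as the finding also suggests.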
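For the `--file` exfiltration finding, confining user-supplied paths to a designated safe directory might look like this sketch (`SAFE_DIR` and the helper name are hypothetical; requires Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

# Hypothetical sandbox directory for description files.
SAFE_DIR = Path("/tmp/fliz-descriptions")


def read_description(user_path):
    """Read a description file only if it resolves inside SAFE_DIR,
    blocking both '../' traversal and absolute paths like /etc/passwd."""
    resolved = (SAFE_DIR / user_path).resolve()
    if not resolved.is_relative_to(SAFE_DIR.resolve()):
        raise ValueError(f"path escapes sandbox: {user_path}")
    return resolved.read_text()
```

Resolving the joined path first and then checking containment is the key step: checking the raw string for `..` is easy to bypass, while the resolved path reflects where the read would actually land.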
Embed Code
[View the full SkillShield report](https://skillshield.io/report/41116efcc3f0a653)
Powered by SkillShield