Trust Assessment
ai-video-generation received a trust score of 16/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include arbitrary command execution, remote code execution via curl/wget piped to a shell, and excessive Bash permissions granted to `infsh`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Remote code is downloaded and piped to an interpreter. Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/okaris/ai-video-generation/SKILL.md:10 |
| CRITICAL | **Remote code execution: curl/wget pipe to shell.** Detected a pattern that downloads and immediately executes remote code, a primary malware delivery vector. Never pipe curl/wget output directly to a shell interpreter. | Static | skills/okaris/ai-video-generation/SKILL.md:10 |
| HIGH | **Excessive Bash permissions granted to `infsh`.** The manifest declares `Bash(infsh *)` as an allowed tool, granting the LLM permission to execute *any* command starting with `infsh`, including destructive or unintended subcommands not shown in the skill's examples. This broad wildcard significantly increases the attack surface if the LLM is compromised or misdirected. Restrict Bash permissions to only the specific `infsh` subcommands and arguments the skill needs (e.g., `Bash(infsh app run)`, `Bash(infsh app list)`), avoid wildcards (`*`) unless thoroughly justified, and strictly validate any user-controlled arguments. | LLM | SKILL.md |
| HIGH | **Potential command injection through the user-controlled `infsh --input` argument.** The skill runs `infsh app run` with a user-controlled `--input` JSON string, e.g. `infsh app run google/veo-3-1-fast --input '{"prompt": "drone shot flying over a forest"}'`. If the `infsh` CLI does not sanitize or escape this JSON before processing it (for example, if it reaches an underlying shell command or `eval`), a malicious user could craft values such as `prompt`, `image_url`, or `audio_url` to inject arbitrary shell commands, which the `Bash(infsh *)` permission would allow to execute. Ensure `infsh` robustly escapes all user-provided input passed to shell commands; validate and sanitize user-controlled parameters before constructing the `--input` JSON; and where possible, pass arguments to `infsh` in a structured form that avoids shell interpretation of user-provided strings. | LLM | SKILL.md:16 |
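The remediation for the curl/wget finding (download, verify against a pinned digest, then execute) can be sketched as follows. This is a minimal illustration, not code from the skill: the payload bytes stand in for a downloaded installer script, and in practice the pinned digest would be hard-coded from the publisher's release page.

```python
import hashlib

# Stand-in for bytes fetched with: curl -fsSL "$URL" -o installer.sh
payload = b"echo ok\n"

# In a real flow this value is pinned ahead of time, not computed here.
pinned_sha256 = hashlib.sha256(payload).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Return True only if the payload's SHA-256 matches the pinned digest."""
    return hashlib.sha256(data).hexdigest() == expected

assert verify(payload, pinned_sha256)            # execute only after this passes
assert not verify(b"tampered payload", pinned_sha256)
```

The key property is that execution is gated on an integrity check, unlike `curl ... | sh`, where whatever the server returns runs immediately.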
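For the injection finding, the safer structured-argument approach can be sketched like this. The `infsh app run` command and model name come from the skill's own examples; the invocation style shown is an assumed sketch, not the skill's actual code. Serializing the user value with `json.dumps` and invoking the CLI as an argument list (never a shell string) keeps quotes and semicolons inert as data.

```python
import json

# Hostile-looking user input: the embedded quote and command would be
# dangerous if interpolated into a shell string.
user_prompt = 'forest"; rm -rf ~'

# json.dumps escapes the quote, producing a well-formed JSON payload.
payload = json.dumps({"prompt": user_prompt})

# Build the command as a list: each element is passed as one argv entry,
# so nothing in `payload` is ever shell-parsed.
argv = ["infsh", "app", "run", "google/veo-3-1-fast", "--input", payload]
# subprocess.run(argv)  # list form, no shell=True

# The hostile quote survives as data, not as shell syntax.
assert json.loads(payload)["prompt"] == user_prompt
```

Contrast this with string interpolation into `--input '{...}'`, where the embedded `"` would terminate the quoting and expose the rest of the input to the shell.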
Scan History
[View the full report on SkillShield](https://skillshield.io/report/2db6d1437d22eaf4)