Trust Assessment
ai-music-generation received a trust score of 25/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, remote code execution via a curl/wget pipe to shell, and command injection through a user-controlled prompt in the `infsh` command.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Remote code download piped to an interpreter. Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/okaris/ai-music-generation/SKILL.md:10 |
| CRITICAL | **Remote code execution: curl/wget pipe to shell.** Detected a pattern that downloads and immediately executes remote code, a primary malware delivery vector. Never pipe curl/wget output directly to a shell interpreter. | Static | skills/okaris/ai-music-generation/SKILL.md:10 |
| HIGH | **Command injection via user-controlled prompt in `infsh` command.** The skill constructs `infsh app run` commands whose `--input` argument contains a user-controlled `prompt` field; the example encloses the entire JSON object in single quotes. If the LLM does not perfectly escape user-provided input when building the shell command, an attacker can inject arbitrary shell commands: a crafted prompt such as `track"}' && ls -la && echo '{"prompt": "` breaks out of the JSON string and executes `ls -la` on the host. This is a common vulnerability when user input is embedded directly into shell command strings without robust escaping. Strictly validate and escape all user-provided `prompt` input for shell execution, explicitly instruct the LLM on safe escaping, or use a safer method for passing structured data to `infsh` (e.g., writing to a temporary file and passing the filename, if `infsh` supports it, or a dedicated API if available). | LLM | SKILL.md:20 |
| MEDIUM | **Broad `Bash(infsh *)` permission allows execution of any `infsh` subcommand.** The skill declares `Bash(infsh *)`, allowing the agent to execute any command starting with `infsh`. The documented examples (`app run`, `app sample`, `app list`, `login`) are benign, but the full capabilities of the `infsh` CLI are unknown: if it includes subcommands for arbitrary file-system access (e.g., `infsh upload-file /etc/passwd`), network requests, or direct shell execution (e.g., `infsh exec "arbitrary command"`), this permission could be abused for data exfiltration, command injection, or other malicious activity. Restrict the `Bash` permission to only the specific `infsh` subcommands and arguments the skill needs (e.g., `Bash(infsh app run)`, `Bash(infsh app sample)`) rather than `Bash(infsh *)`, following the principle of least privilege. | LLM | Manifest / Declared permissions |
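For the broad-permission finding, assuming Claude Code's skill-frontmatter syntax for `allowed-tools` (an assumption; the skill's actual manifest format is not shown in this report), a least-privilege declaration along the lines of the recommendation might look like:

```yaml
---
name: ai-music-generation
# Narrowed from the broad Bash(infsh *) pattern to only the subcommands
# the skill's documented examples actually use.
allowed-tools: Bash(infsh app run:*), Bash(infsh app sample:*), Bash(infsh app list), Bash(infsh login)
---
```

Any `infsh` subcommand outside this list would then require explicit user approval rather than running silently.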
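The curl/wget-pipe-to-shell finding above can be mitigated by a download, verify, then execute pattern. The sketch below is a minimal, self-contained demo of that pattern: the "downloaded" installer is an in-memory stand-in, and the expected checksum is computed from it only so the example runs without a network (in real use the digest must come from a trusted out-of-band source).

```python
import hashlib
import os
import subprocess
import sys
import tempfile

# Stand-in for downloaded installer bytes; in real use, fetch the file
# with urllib/requests instead of piping curl output straight to sh.
script = b"echo installed\n"

# EXPECTED_SHA256 must come from a trusted, out-of-band channel (vendor
# docs, signed release notes). It is computed from the demo bytes here
# purely so this sketch is self-contained and runnable.
EXPECTED_SHA256 = hashlib.sha256(script).hexdigest()

# Refuse to execute anything whose digest does not match.
if hashlib.sha256(script).hexdigest() != EXPECTED_SHA256:
    sys.exit("checksum mismatch: refusing to execute")

# Execute only after verification, from a file under our control.
with tempfile.NamedTemporaryFile("wb", suffix=".sh", delete=False) as f:
    f.write(script)
    path = f.name
try:
    result = subprocess.run(["sh", path], capture_output=True,
                            text=True, check=True)
finally:
    os.unlink(path)
```

The key property is that verification happens on the exact bytes that will run, before any of them reach an interpreter.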
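For the command-injection finding, the safer alternatives the report suggests can be sketched as follows. This is an illustration only: it builds the argv list without invoking `infsh` (which may not exist on the reader's machine), using the finding's own crafted-prompt payload to show that list-form execution keeps it inert.

```python
import json
import shlex

# The injection payload from the finding: it is designed to break out of
# a single-quoted JSON string when naively interpolated into a shell line.
malicious_prompt = 'track"}\' && ls -la && echo \'{"prompt": "'

payload = json.dumps({"prompt": malicious_prompt})

# SAFER: build an argv list and run it WITHOUT a shell
# (e.g. subprocess.run(argv), no shell=True). The payload stays a single
# argument; no shell ever re-parses the && operators inside it.
argv = ["infsh", "app", "run", "--input", payload]

# If a single shell string is unavoidable, quote every argument
# explicitly so the shell sees each one as an opaque token.
quoted = " ".join(shlex.quote(a) for a in argv)
```

Round-tripping `quoted` through `shlex.split` reproduces `argv` exactly, which is the property that defeats the quote-breaking attack.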
[View the full report](https://skillshield.io/report/440ee33781040420)
Powered by SkillShield