Trust Assessment
ai-podcast-creation received a trust score of 11/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 2 critical, 2 high, 1 medium, and 0 low severity. Key findings include "Arbitrary command execution", "Remote code execution: curl/wget pipe to shell", and "Excessive Bash Permissions".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Remote code download piped to an interpreter. Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/okaris/ai-podcast-creation/SKILL.md:9 |
| CRITICAL | **Remote code execution: curl/wget pipe to shell.** Detected a pattern that downloads and immediately executes remote code, a primary malware delivery vector. Remediation: never pipe curl/wget output directly to a shell interpreter. | Static | skills/okaris/ai-podcast-creation/SKILL.md:9 |
| HIGH | **Excessive Bash permissions.** The skill declares `Bash(infsh *)` as an allowed tool, granting it permission to execute any command starting with `infsh`. While the examples show legitimate uses, this broad wildcard allows abuse if the agent is tricked into generating malicious `infsh` commands, leading to arbitrary command execution within the `infsh` ecosystem. Remediation: restrict Bash permissions to specific `infsh` subcommands and arguments (e.g., `Bash(infsh app run infsh/kokoro-tts)`, `Bash(infsh app run infsh/media-merger)`) rather than a broad wildcard, and strictly validate and sanitize any user-controlled data passed to `infsh` commands. | LLM | SKILL.md:1 |
| HIGH | **Potential command injection via `infsh` input.** The skill passes user-controlled content (e.g., `<host-lines>`, `<guest-lines>`, `<your-document-content>`, `[YOUR TOPIC]`) directly into `infsh app run` commands as JSON input. Given the broad `Bash(infsh *)` permission, if the `infsh` CLI or the underlying applications it calls (`kokoro-tts`, `ai-music`, `media-merger`) are vulnerable to injection through malformed JSON or specially crafted strings, this could lead to arbitrary command execution; for example, a `text` or `prompt` value containing characters that break out of the JSON string could be interpreted as commands by the underlying system. Remediation: validate and sanitize all user-provided data before embedding it into JSON inputs for `infsh` commands, properly escape or encode special characters, and ideally use a structured data-passing mechanism instead of raw string interpolation into JSON. | LLM | SKILL.md:100 |
| MEDIUM | **Prompt injection risk for downstream LLM.** The skill's examples show the agent constructing prompts for `openrouter/claude-sonnet-45` from user-provided content (e.g., `[YOUR TOPIC]`, `<your-document-content>`). If the host LLM inserts untrusted input into these prompts without sanitization or validation, a malicious user could inject instructions that manipulate the `claude-sonnet-45` model, potentially causing unintended content generation, data disclosure, or other undesirable outcomes. Remediation: strictly validate all user-provided content used to construct downstream prompts; prefer prompt templating with strict variable substitution, or pass user input as separate context rather than embedding it directly in the main prompt string. | LLM | SKILL.md:100 |
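The two critical findings concern the download-and-execute pattern (piping curl/wget output into a shell). The standard alternative is to download the script to disk, verify it against a digest published out of band, and only then review and run it. A minimal Python sketch of the verification step; the function name is illustrative and not part of the skill:

```python
import hashlib


def verify_script(data: bytes, expected_sha256: str) -> bytes:
    """Refuse to use downloaded code unless its SHA-256 matches a pinned digest."""
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"checksum mismatch: expected {expected_sha256}, got {digest}")
    # Only after this check should the script be written to disk,
    # inspected, and executed as a separate, deliberate step.
    return data
```

This turns a silent compromise of the download host into a loud failure: tampered content no longer matches the pin and is rejected before it ever reaches an interpreter.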
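For the two injection findings, the core mitigation is to serialize user text with a real JSON encoder and pass the command as an argument vector (no shell), rather than interpolating strings into a shell line. A hedged sketch under assumptions: the `--input` flag and the helper name are hypothetical, not the actual `infsh` CLI syntax:

```python
import json


def build_tts_command(user_text: str) -> list[str]:
    """Build an argv list for subprocess.run(..., shell=False); flags are hypothetical."""
    # json.dumps escapes quotes, backslashes, and control characters, so
    # user_text cannot break out of the JSON string value.
    payload = json.dumps({"text": user_text})
    # As a single argv element, shell metacharacters in the payload
    # (; | $() backticks) are never interpreted by a shell.
    return ["infsh", "app", "run", "infsh/kokoro-tts", "--input", payload]
```

With this shape, even a hostile input such as `"; rm -rf ~ #` arrives at the tool as an inert JSON string, which also makes it practical to narrow `Bash(infsh *)` down to the specific subcommands the skill actually issues.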
Full report: https://skillshield.io/report/5dc0e2b55af5e0ab