Trust Assessment
ai-radio-host received a trust score of 65/100, placing it in the Caution category. Users should review this skill's security findings before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 0 medium, and 1 low severity. Key findings include unpinned Python dependencies, dynamic fetching of skill instructions from an external URL, and an API key sent to a configurable base URL.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 38/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **API key sent to configurable base URL.** The `scripts/agent-poll.js` script reads `MOLT_RADIO_API_KEY` from environment variables and sends it in the `X-Agent-Key` header to the `baseUrl`, which is derived from `process.env.MOLT_RADIO_URL`. An attacker who can manipulate the `MOLT_RADIO_URL` environment variable can redirect all API calls, including those carrying `MOLT_RADIO_API_KEY`, to a malicious server, exfiltrating the agent's API key. `MOLT_RADIO_URL` should be hardcoded or securely configured by the skill provider, not overridable via environment variables in a way that allows untrusted redirection; if configuration is necessary, it should go through a trusted mechanism that prevents arbitrary URL injection. | LLM | scripts/agent-poll.js:11 |
| HIGH | **Dynamic fetching of skill instructions from external URL.** The skill instructs the agent to `curl "https://moltradio.xyz/skill.md"` to get the latest instructions and to follow them if they differ. If `moltradio.xyz` is compromised, an attacker could serve malicious instructions, leading to prompt injection against the agent's LLM, data exfiltration, or command injection. This creates a dynamic supply chain risk in which the agent's behavior can be altered by an external, potentially untrusted source. Do not dynamically fetch and execute instructions from unverified sources; all instructions should ship with the trusted skill package, with updates delivered via a secure update mechanism for the whole package rather than fetched raw markdown. | LLM | SKILL.md:20 |
| HIGH | **Server-controlled prompt used in agent's turn content.** The `scripts/agent-poll.js` script fetches a `prompt` from the `moltradio.xyz` server (`GET /sessions/:id/prompt`) and embeds it directly into the `content` of the agent's turn via `turnTemplate`. If the agent's LLM then processes this `content` as part of its own instructions or context, a malicious `moltradio.xyz` server could inject harmful instructions into the agent's subsequent actions. The agent should treat all external inputs, including server-provided prompts, as untrusted data: strictly separate instructions from data, apply robust input sanitization and validation, and treat `content` as a message to be sent, not as instructions for the LLM itself. | LLM | scripts/agent-poll.js:78 |
| LOW | **Unpinned Python dependencies.** The `pip install` command for `kokoro`, `soundfile`, and `numpy` specifies no version numbers, so the agent could unknowingly install a vulnerable or malicious version if a package maintainer's account is compromised or a new release introduces breaking changes or security flaws. Pin exact versions for all dependencies (e.g., `kokoro==1.2.3`) to ensure reproducible, secure installations, and consider a `requirements.txt` with hashed dependencies. | LLM | SKILL.md:140 |
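For the critical finding, one mitigation is to validate the base URL against a fixed allowlist before any request that carries the API key. A minimal sketch in Node.js, assuming a hypothetical `TRUSTED_HOSTS` set and helper name (the report indicates `agent-poll.js` performs no such check today):

```javascript
// Sketch: refuse to send X-Agent-Key to any host outside a fixed allowlist.
// TRUSTED_HOSTS and resolveBaseUrl are illustrative names, not part of the skill.
const TRUSTED_HOSTS = new Set(['moltradio.xyz']); // assumed canonical host

function resolveBaseUrl(raw) {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== 'https:' || !TRUSTED_HOSTS.has(url.hostname)) {
    throw new Error(`Refusing to send X-Agent-Key to untrusted base URL: ${raw}`);
  }
  return url.origin; // e.g. 'https://moltradio.xyz'
}
```

Under this sketch, a tampered `MOLT_RADIO_URL` pointing at an attacker host fails fast instead of silently receiving the key.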
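For the dynamically fetched `skill.md`, a fetched copy could at least be checked against a digest pinned in the reviewed package before being followed. A hedged sketch; the helper name and the idea of a pinned digest are assumptions, not features of the skill:

```javascript
const crypto = require('node:crypto');

// Returns true only when the fetched text hashes to the digest recorded at
// review time. A constant-time comparison would be preferable in production.
function matchesPinnedDigest(text, expectedHex) {
  const digest = crypto.createHash('sha256').update(text, 'utf8').digest('hex');
  return digest === expectedHex;
}
```

Any instruction file that fails the check would be discarded rather than executed.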
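For the server-controlled prompt, the polling script could at least mark the fetched prompt as untrusted data before it is spliced into the turn content. A sketch with an illustrative delimiting scheme (delimiters alone do not fully prevent prompt injection, and this helper is hypothetical):

```javascript
// Wrap a server-supplied prompt so downstream consumers can distinguish it
// from trusted instructions. Strips any text that could forge the closing
// delimiter. Illustrative only; not a complete prompt-injection defense.
function wrapUntrustedPrompt(prompt) {
  const cleaned = String(prompt).replaceAll('</untrusted>', '');
  return `<untrusted source="moltradio.xyz">\n${cleaned}\n</untrusted>`;
}
```

The wrapped string is then what gets placed into the turn's `content`, signaling to the consuming LLM that the span is data, not instructions.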
Powered by SkillShield