Trust Assessment
codecast received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 11 findings: 4 critical, 6 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, file-read-plus-network-send exfiltration, and a dangerous `subprocess.run()` call.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating serious issues at the manifest level.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (11)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution**: Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | `skills/allanjeng/codecast/scripts/platforms/discord.py:23` |
| CRITICAL | **Arbitrary command execution**: Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | `skills/allanjeng/codecast/scripts/platforms/discord.py:85` |
| CRITICAL | **File read + network send exfiltration**: AI agent config/credential file access. Remediation: remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | `skills/allanjeng/codecast/SKILL.md:52` |
| CRITICAL | **File read + network send exfiltration**: AI agent config/credential file access. Remediation: remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | `skills/allanjeng/codecast/scripts/dev-relay.sh:20` |
| HIGH | **Dangerous call: `subprocess.run()`**: call to `subprocess.run()` detected in function `_curl_post`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`, `eval`, and `os.system`; use safer alternatives. | Static | `skills/allanjeng/codecast/scripts/platforms/discord.py:23` |
| HIGH | **Dangerous call: `subprocess.run()`**: call to `subprocess.run()` detected in function `post`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`, `eval`, and `os.system`; use safer alternatives. | Static | `skills/allanjeng/codecast/scripts/platforms/discord.py:85` |
| HIGH | **Sensitive path access: AI agent config**: access to the AI agent config path `~/.claude/` detected; this may indicate credential theft. Remediation: verify that access to this sensitive path is justified and declared. | Static | `skills/allanjeng/codecast/SKILL.md:52` |
| HIGH | **Sensitive path access: AI agent config**: access to the AI agent config path `~/.claude/` detected; this may indicate credential theft. Remediation: verify that access to this sensitive path is justified and declared. | Static | `skills/allanjeng/codecast/scripts/dev-relay.sh:20` |
| HIGH | **Recommended Claude Code settings grant full system access**: the `SKILL.md` documentation recommends configuring Claude Code with `defaultMode: "bypassPermissions"` and `allow: ["*"]` in `~/.claude/settings.json`. This grants the agent unrestricted access to the host filesystem and other capabilities, allowing it to execute arbitrary commands, read or write any file, and perform any action; a compromised or misused agent could cause severe system-level damage or data exfiltration. Remediation: advise users against `bypassPermissions` and `allow: ["*"]` in production or sensitive environments; recommend a more restrictive permission model (e.g., `askPermissions` or specific `allow` rules) and running the agent in a sandboxed environment (Docker, VM) with minimal necessary privileges. | LLM | `SKILL.md:48` |
| HIGH | **Agent output, including sensitive data, is streamed to Discord**: the core functionality of `codecast` is to stream all agent output, including file contents (read/write) and bash command inputs and outputs, to a Discord channel via webhook. While this transparency is the intended purpose, it inherently poses a high risk of data exfiltration: if the agent is instructed (maliciously or accidentally) to read sensitive files (API keys, configuration files, SSH keys, `/etc/passwd`) or execute commands that reveal sensitive system information, that data is posted directly to the configured channel. The `truncate` function in `parse-stream.py` limits the length of some outputs but does not prevent targeted exfiltration of small sensitive snippets. Remediation: warn users that all agent interactions and outputs, including potentially sensitive data, will be visible in the Discord channel; use this skill only with non-sensitive projects or in highly controlled, private environments; where possible, implement content filtering or redaction of known sensitive patterns in `parse-stream.py`. | LLM | `scripts/parse-stream.py:190` |
| MEDIUM | **Discord webhook URL and bot token exposed via environment variables**: the webhook URL (`WEBHOOK_URL`) and optional bot token (`BOT_TOKEN`) are read from files (`.webhook-url`, `.bot-token`) and exported as environment variables accessible to `parse-stream.py` and its child processes (e.g., `curl` commands in `discord.py`). Although the source files are protected with `chmod 600`, any process that can inspect its own or its parent's environment, or any vulnerability in the Python script allowing arbitrary code execution, could exfiltrate these credentials; given the excessive-permissions finding above, a compromised agent running as the same user could also read the files directly, making the environment exposure a secondary but still relevant concern. Remediation: pass credentials to the Python script via a secrets management system or via stdin/pipes to minimize environment-variable exposure, and run the agent in a restricted environment that cannot access parent-process environment variables or sensitive files. | LLM | `scripts/dev-relay.sh:70` |
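The `subprocess.run()` findings above recommend preferring library APIs over shell commands. A minimal sketch of that remediation for the webhook posting in `discord.py`, using only the standard library instead of shelling out to `curl` (the function names here are hypothetical, not taken from the skill's code):

```python
import json
import urllib.request


def build_webhook_request(webhook_url: str, content: str) -> urllib.request.Request:
    """Build a POST request for a Discord webhook; no shell is involved."""
    payload = json.dumps({"content": content}).encode("utf-8")
    return urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def post_webhook(webhook_url: str, content: str) -> int:
    """Send the request and return the HTTP status (Discord answers 204)."""
    with urllib.request.urlopen(build_webhook_request(webhook_url, content),
                                timeout=10) as resp:
        return resp.status
```

Because the URL and payload are passed as structured arguments rather than interpolated into a command string, there is no shell to inject into.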
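The streaming-exfiltration finding suggests redacting known sensitive patterns before output reaches Discord. A sketch of such a filter, assuming it would be applied to each message in `parse-stream.py`; the patterns below are illustrative examples, not an exhaustive or authoritative list:

```python
import re

# Illustrative patterns only: private-key blocks, AWS access key IDs,
# and generic "key = value" style secrets.
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
               r"-----END [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    re.compile(r"\b(?:api[_-]?key|token|secret)\s*[=:]\s*\S+", re.IGNORECASE),
]


def redact(text: str) -> str:
    """Replace known sensitive patterns with a placeholder before posting."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Pattern-based redaction is best-effort and cannot catch every secret, which is why the finding also recommends restricting the skill to non-sensitive projects.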
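The medium-severity finding recommends passing credentials via stdin or pipes rather than environment variables, since child processes inherit the environment. A sketch of the consuming side, assuming a wrapper such as `cat .webhook-url | python parse-stream.py` (this helper is hypothetical and not part of the skill):

```python
import sys


def read_webhook_url(stream=sys.stdin) -> str:
    """Read the webhook URL from a stream (stdin by default).

    Unlike an exported environment variable, data read from stdin is
    visible only to this process, not to children such as curl.
    """
    url = stream.readline().strip()
    if not url.startswith("https://"):
        raise ValueError("expected an https webhook URL on stdin")
    return url
```

The same approach applies to the optional bot token, e.g. by reading a second line from the pipe.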
Scan History
Embed Code
[SkillShield report](https://skillshield.io/report/40b84090ff2e0138)
Powered by SkillShield