Trust Assessment
discord-hub-my received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 1 high, 0 medium, and 0 low severity. Key findings include: command injection via shell expansion of a user-controlled message; arbitrary command execution via sourcing an untrusted `.env` file; and a data exfiltration risk via an attacker-controlled `WEBHOOK_URL`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 25/100.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command injection via shell expansion of user-controlled message.** The `MSG` variable is assigned from user input (`$1`) using shell parameter expansion. If `$1` contains a command substitution (e.g., `$(command)` or backticks), the embedded command is executed by the shell during assignment, and its output is then included in the JSON payload sent via `curl`. *Remediation:* sanitize or escape `$1` before assigning it to `MSG`; avoid interpolating untrusted input into commands that undergo shell expansion, and if the message must contain special characters, escape them first for the shell and then for JSON. | LLM | discord_send.sh:9 |
| CRITICAL | **Arbitrary command execution via sourcing untrusted `.env` file.** `run.sh` sources a `.env` file from its own directory (`. "$DIR/.env"`). Anyone who can control the content of that file can inject and execute arbitrary shell commands with the permissions of the script. *Remediation:* avoid sourcing `.env` files from potentially untrusted locations; if environment variables must be loaded, parse key-value pairs without executing code, protect the file with strict permissions, and validate its content. | LLM | run.sh:9 |
| HIGH | **Data exfiltration risk via attacker-controlled `WEBHOOK_URL`.** `WEBHOOK_URL`, which determines the destination for messages, is loaded from the same `.env` file. Via the sourcing vulnerability above, an attacker could set it to a server they control, exfiltrating every message the bot sends. *Remediation:* load critical configuration like `WEBHOOK_URL` only from trusted sources, enforce strict access controls on `.env` files, and validate the URL against a whitelist of allowed domains where possible. | LLM | run.sh:9 |
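One way to address the first finding is to stop interpolating the message into the payload at all and let `jq` encode it as data. This is a hypothetical hardened fragment, not the skill's actual code: the `build_payload` helper is an illustrative name, the commented `curl` line is an assumption about how the payload is sent, and `jq` is assumed to be available.

```shell
#!/bin/sh
# Hypothetical hardened sketch for discord_send.sh (not the skill's code).
build_payload() {
  # jq --arg passes the value out-of-band, so $(...) or backticks in the
  # input are never expanded by the shell or spliced raw into the JSON.
  jq -nc --arg content "$1" '{content: $content}'
}

MSG="$1"
PAYLOAD=$(build_payload "$MSG")
printf '%s\n' "$PAYLOAD"
# curl -sS -H 'Content-Type: application/json' -d "$PAYLOAD" "$WEBHOOK_URL"
```

A payload built this way carries an input like `$(id)` as literal text rather than as a command.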
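For the `.env` sourcing finding, one safer pattern is to read key-value pairs from the file without ever executing it. A minimal sketch assuming a POSIX shell; `load_env` is a hypothetical helper, not part of `run.sh`:

```shell
#!/bin/sh
# Hypothetical replacement for `. "$DIR/.env"`: parse, don't execute.
load_env() {
  while IFS= read -r line; do
    case "$line" in
      [A-Za-z_]*=*)
        key=${line%%=*}
        val=${line#*=}
        # Reject keys with characters outside [A-Za-z0-9_].
        case "$key" in
          *[!A-Za-z0-9_]*) continue ;;
        esac
        # The value is stored verbatim; $(...) stays literal text.
        export "$key=$val"
        ;;
    esac
  done < "$1"
}
# Example: load_env "$DIR/.env"
```

Lines that are not simple assignments (including injected commands) are silently skipped, and substitutions embedded in values are never evaluated.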
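For the `WEBHOOK_URL` finding, the whitelist suggestion could look like the following sketch. The Discord webhook prefix is an assumed allow-list entry for illustration; `validate_webhook` is a hypothetical helper:

```shell
#!/bin/sh
# Hypothetical allow-list check before any message is sent.
validate_webhook() {
  case "$1" in
    https://discord.com/api/webhooks/*) return 0 ;;  # assumed allowed prefix
    *) return 1 ;;
  esac
}

WEBHOOK_URL="${WEBHOOK_URL:-}"
if ! validate_webhook "$WEBHOOK_URL"; then
  echo "refusing to send: WEBHOOK_URL is not an allowed Discord webhook" >&2
  # the real script would exit 1 here; omitted so the sketch is inert
fi
```

With such a check, a `.env` that rewrites `WEBHOOK_URL` to an attacker's server fails before any payload leaves the machine.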
Powered by SkillShield