Trust Assessment
eyebot-socialbot received a trust score of 62/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 0 medium, and 1 low severity. Key findings include remote command injection via a 'mode:exec' payload (critical), sensitive data exfiltration to a configurable external API (high), and a missing Node lockfile (low).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Remote Command Injection via 'mode:exec' payload.** The `scripts/socialbot.sh` script constructs a JSON payload `{"request":"%s","mode":"exec"}` where the `request` field is directly populated by user-supplied arguments (`$*`). This payload is then sent via `curl` to an external API endpoint defined by the `EYEBOT_API` environment variable. The presence of `"mode":"exec"` strongly indicates that the content of the `request` field is intended for execution on the remote server. An attacker could craft malicious input to be passed as arguments to the skill, leading to arbitrary command execution on the remote server if the `EYEBOT_API` endpoint does not properly sanitize or validate the `request` field before execution. *Remediation:* The remote API endpoint must strictly validate and sanitize the `request` field, or avoid executing arbitrary commands based on untrusted user input. If remote execution is intended, implement a whitelist of allowed commands and arguments, or use a safer execution mechanism that does not directly interpret user-provided strings as commands. The `mode:exec` pattern is inherently dangerous when combined with untrusted input. | LLM | scripts/socialbot.sh:14 |
| HIGH | **Sensitive data exfiltration to configurable external API.** The script `scripts/socialbot.sh` sends all user-provided arguments, which can include sensitive information (e.g., API keys, tokens, or private messages as suggested by the `SKILL.md`'s `$TOKEN` example), to an external API endpoint specified by the `EYEBOT_API` environment variable. The `EYEBOT_API` variable is read without validation, allowing it to be redirected to arbitrary endpoints. If this environment variable is compromised or points to a malicious server, all data passed to the skill can be exfiltrated to an attacker-controlled destination. *Remediation:* (1) Restrict the `EYEBOT_API` environment variable to a whitelist of trusted domains. (2) Implement strict input validation and sanitization for all arguments before they are included in the payload. (3) Avoid passing sensitive information directly as arguments to external APIs; if sensitive data must be sent, ensure it is encrypted end-to-end and only sent to trusted, verified endpoints. (4) Consider using a more secure method for configuration than environment variables that can be easily overridden. | LLM | scripts/socialbot.sh:15 |
| LOW | **Node lockfile missing.** package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/eyebots/eyebot-socialbot/package.json |
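The critical finding can be mitigated at the payload-construction step. A minimal sketch, assuming `jq` is available: JSON-encode the untrusted arguments with `--arg` instead of interpolating them via `printf %s`, so a quote character in user input cannot break out of the `request` string and inject extra JSON keys. The `build_payload` function name and the `"query"` mode value are hypothetical illustrations, not part of the skill.

```shell
# Hypothetical hardened payload builder for scripts/socialbot.sh.
# jq's --arg JSON-escapes the untrusted input, so quotes in "$*"
# cannot terminate the "request" string or alter the payload shape.
build_payload() {
  jq -cn --arg req "$*" '{"request": $req, "mode": "query"}'
}
```

Note that escaping only prevents structural injection into the JSON itself; even a well-formed payload remains dangerous if the remote server executes the `request` field, which is why the finding also calls for server-side allowlisting.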
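For the high-severity finding, the endpoint read from `EYEBOT_API` can be checked against an allowlist before any data leaves the machine. A minimal sketch; the `api.eyebot.example` host is a placeholder assumption, not the skill's real endpoint.

```shell
# Hypothetical allowlist guard: permit only the documented API host,
# over HTTPS, before curl is ever invoked with user-supplied data.
validate_api_endpoint() {
  case "$1" in
    https://api.eyebot.example/*) return 0 ;;
    *) printf 'refusing untrusted endpoint: %s\n' "$1" >&2; return 1 ;;
  esac
}
```

The script would call `validate_api_endpoint "$EYEBOT_API" || exit 1` before the `curl` request; pinning the pattern to `https://` also rules out silently downgrading to a plaintext endpoint.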
Scan History
Embed Code
[SkillShield report](https://skillshield.io/report/ea4a0d15f56ff9b1)
Powered by SkillShield