Trust Assessment
eureka-feedback received a trust score of 95/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified one finding: 0 critical, 0 high, 1 medium, and 0 low severity. The sole finding is a potential command injection via an unsanitized message in `clawdbot` calls.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Potential Command Injection via unsanitized message in `clawdbot` calls | LLM | SKILL.md:16 |

The skill provides example `bash` commands that invoke `clawdbot` with a `--message` argument. If an AI agent or user executes these commands and substitutes untrusted input into the `<your message>` or `<message>` placeholders without proper sanitization, command injection becomes possible. This risk exists if the `clawdbot` tool itself is vulnerable to shell injection through its arguments, or if the execution environment directly interpolates the message into a shell command without adequate escaping.

Recommendations:

1. **For LLM developers/users**: Implement robust input sanitization and escaping for any user-provided content before it is passed as an argument to shell commands. Consider using safe execution methods (e.g., `subprocess.run` with `shell=False` and arguments passed as a list) when directly executing commands.
2. **For `clawdbot` developers**: Ensure that the `--message` argument is handled securely, preventing shell metacharacters from being interpreted as commands.
3. **For skill authors**: Add explicit warnings in the skill documentation about the dangers of injecting unsanitized input into command arguments. Consider providing an alternative, safer API-based interaction if available.
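The mitigation described above can be sketched in Python. This is a hypothetical example, not code from the skill itself: it assumes a `clawdbot` executable on `PATH` that accepts a `--message` flag, and contrasts passing the untrusted message as a discrete argument list (no shell involved) with quoting it when a shell command string is unavoidable.

```python
import shlex
import subprocess

def send_message_safely(message: str) -> None:
    """Invoke clawdbot with the message as a discrete list argument.

    With shell=False (the subprocess default) and arguments passed as a
    list, shell metacharacters in `message` are never interpreted by a
    shell, so `; rm -rf /` arrives as literal text, not a command.
    """
    subprocess.run(["clawdbot", "--message", message], check=True)

def build_shell_command(message: str) -> str:
    """If a shell command string is unavoidable, quote untrusted input.

    shlex.quote() wraps the message so the shell treats it as a single
    literal word rather than parsing metacharacters inside it.
    """
    return f"clawdbot --message {shlex.quote(message)}"
```

The list-argument form is preferable: quoting helpers only defend against the shell itself, while `shell=False` removes the shell from the pipeline entirely.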
Powered by SkillShield