Trust Assessment
claw-me-maybe received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 4 high, 0 medium, and 0 low severity. Key findings include "Sensitive path access: AI agent config," "Potential Command Injection in `curl` parameters," and "Potential Command Injection in `jq` filters or shell loops."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/nickhamze/claw-me-maybe/SKILL.md:65 |
| HIGH | **Potential Command Injection in `curl` parameters.** The skill documentation provides `curl` examples that include parameters (e.g., `q`, `chatID`, `text`, `emoji`, `remindAt`, `assetID`, `accountID`, `participants`, `replyTo`, `upToMessageID`) that are likely to be populated by user input. If the LLM generates shell commands by directly interpolating unsanitized user input into these parameters, it could lead to command injection, allowing an attacker to execute arbitrary shell commands. This is a significant risk in the `claude_code` ecosystem, where the LLM generates executable code based on these patterns. When generating shell commands, ensure all user-provided input is properly shell-escaped and URL-encoded before being interpolated into `curl` arguments, URLs, or JSON payloads. For JSON payloads, use a robust JSON library to construct the payload rather than string concatenation. | LLM | SKILL.md:169 |
| HIGH | **Potential Command Injection in `jq` filters or shell loops.** The skill documentation includes examples using `jq` and shell loops (`for ... in $(...)`) where parts of the command (e.g., `jq` filter expressions, variables in loops) could potentially be influenced by user input. If the LLM generates code that directly interpolates unsanitized user input into these constructs, it could lead to command injection. An attacker could craft input to execute arbitrary commands within the shell environment. When generating shell commands involving `jq` or shell loops, ensure all user-provided input is strictly validated and properly shell-escaped before being used in command arguments or variable assignments. Avoid constructing `jq` filters directly from user input. | LLM | SKILL.md:340 |
| HIGH | **Arbitrary File Write via `curl --output` with user-controlled filename.** The skill documentation demonstrates downloading attachments using `curl -X POST ... --output attachment.file`. If the output filename (`attachment.file`) can be influenced by user input without proper sanitization, an attacker could specify an arbitrary path and filename. This could lead to an arbitrary file write on the system where the skill executes, potentially overwriting critical system files or placing malicious executables. When generating commands to download files, ensure that the output filename is either fixed to a safe, temporary location or strictly validated and sanitized to prevent path traversal and arbitrary file overwrites. Consider using a dedicated temporary directory for downloads. | LLM | SKILL.md:320 |
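The JSON-payload mitigation recommended in the `curl` finding can be illustrated with `jq -n`, which builds the payload as data rather than by string concatenation. A minimal sketch, assuming parameter names like `chatID` and `text` from the finding; the input value is hypothetical:

```shell
# Hypothetical user input containing shell metacharacters and quotes.
user_text='hello; rm -rf / "quoted"'

# jq -n constructs valid JSON regardless of special characters in the input,
# so the value can never break out of the payload structure.
payload=$(jq -n --arg chatID "12345" --arg text "$user_text" \
  '{chatID: $chatID, text: $text}')

echo "$payload"
```

For query-string parameters, `curl --data-urlencode` applies the same principle: the value is encoded by the tool rather than interpolated into the URL by hand.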
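Likewise, the `jq` filter finding's advice ("avoid constructing `jq` filters directly from user input") can be sketched with `--arg`, which passes a user-supplied value into the filter as a plain string variable instead of splicing it into filter syntax. The sender-filter scenario below is hypothetical, not taken from the skill:

```shell
# Malicious attempt to break out of a naively concatenated filter.
sender='alice" or true or "'

# Because --arg binds $s as data, the injection attempt is compared
# literally and matches no element.
echo '[{"from":"alice"},{"from":"bob"}]' \
  | jq -c --arg s "$sender" '[.[] | select(.from == $s)]'
# → []
```

Had the same value been concatenated into the filter string, the `" or true or "` fragment would have become live `jq` syntax and matched every element.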
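The arbitrary-file-write finding's mitigation (a dedicated temporary directory plus a sanitized filename) can be sketched as follows; the attacker-style filename is a hypothetical illustration:

```shell
# Hypothetical user-influenced filename attempting path traversal.
suggested_name='../../etc/cron.d/evil'

# Confine downloads to a fresh temporary directory and strip any
# path components from the name before using it with curl --output.
dir=$(mktemp -d)
safe_name=$(basename -- "$suggested_name")   # path components removed
curl_output="$dir/$safe_name"
echo "$curl_output"
```

`basename` reduces the traversal attempt to a bare filename, so the write can only land inside the freshly created temporary directory.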
Embed Code
[SkillShield Report](https://skillshield.io/report/f5b04c88d64ee58d)
Powered by SkillShield