Trust Assessment
resumeclaw received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: command injection via an unsanitized file path (critical), JSON injection via unescaped user input in API payloads (high), and access to the sensitive environment variable `$HOME` (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via unsanitized file path.** The `cmd_create` function reads resume content with `cat "$resume_file"`, where `resume_file` is derived from user input. A path containing shell metacharacters (e.g., `"; rm -rf /; echo "` or `$(rm -rf /)`) can lead to arbitrary command execution on the host running the skill. Validate or sanitize the path: reject shell metacharacters, or better, read the file with a language-specific function that does not involve shell execution. If `cat` is retained, keep the path quoted and free of command substitutions, and consider resolving it with `readlink -f` and checking that it falls within an allowed directory. | LLM | scripts/resumeclaw.sh:147 |
| HIGH | **JSON Injection via unescaped user input in API payloads.** Several commands (login, register, create, search, chat) build JSON payloads by directly embedding user-provided values (email, password, name, query, location, message) without JSON escaping. An attacker can inject arbitrary JSON fields or malform the structure, which can enable: 1) prompt injection if the backend feeds these fields into LLM prompts, 2) data exfiltration if injected fields cause sensitive data to be returned, or 3) command injection if the backend passes them to shell commands or database queries unsanitized. JSON-escape every user-provided value before embedding it, e.g. with `jq -Rs '.'` or a small Python/Node.js helper; apply this to the `email`, `password`, `name`, `query`, `location`, and `message` variables. Other affected lines: 120, 164, 165, 240, and 260. | LLM | scripts/resumeclaw.sh:100 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to the sensitive environment variable `$HOME` was detected in a shell context. Verify that this access is necessary and that the value is not exfiltrated. | Static | skills/hherzai-crypto/resumeclaw/scripts/resumeclaw.sh:14 |
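The two LLM-layer findings above have well-known shell-level mitigations. The sketch below is illustrative only: the helper names (`safe_read_resume`, `build_login_payload`) and the allowed directory `$HOME/resumes` are assumptions, not code from the skill. It shows the pattern of confining a user-supplied path to an allowed directory via `readlink -f`, and JSON-escaping user input with `jq -Rs '.'` before embedding it in a payload.

```shell
#!/usr/bin/env sh
# Hypothetical hardened helpers -- names and paths are illustrative.

ALLOWED_DIR="${HOME}/resumes"   # assumed location of resume files

# Resolve the user-supplied path and refuse anything outside ALLOWED_DIR.
safe_read_resume() {
  resume_file=$(readlink -f -- "$1") || return 1
  case "$resume_file" in
    "$ALLOWED_DIR"/*) ;;   # canonical path is inside the allowed directory
    *) echo "refusing path outside $ALLOWED_DIR" >&2; return 1 ;;
  esac
  cat -- "$resume_file"    # quoted, with -- to stop option abuse
}

# JSON-escape each field with jq before building the payload, so quotes,
# backslashes, and control characters in user input cannot break the JSON.
build_login_payload() {
  email=$(printf '%s' "$1" | jq -Rs '.')
  password=$(printf '%s' "$2" | jq -Rs '.')
  printf '{"email":%s,"password":%s}' "$email" "$password"
}
```

Note that `jq -Rs '.'` emits a complete JSON string literal (including the surrounding quotes), so it is interpolated bare into the payload template; this requires `jq` to be installed, which the skill already assumes for API work.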