Trust Assessment
solobuddy received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 2 critical, 1 high, 1 medium, and 1 low severity. Key findings include Sensitive path access: AI agent config, Unsanitized `dataPath` leads to arbitrary command execution, Unsanitized `<name>` placeholder allows command injection and path traversal.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 31/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Unsanitized `dataPath` leads to arbitrary command execution The `solobuddy.dataPath` configuration variable is directly interpolated into multiple shell commands (e.g., `cat`, `echo`, `tail`, `ls`, `cd`, `git`) without apparent sanitization. An attacker who can control this configuration value (e.g., by modifying `~/.clawdbot/clawdbot.json` or by tricking the user into setting a malicious path) can inject arbitrary shell commands. This allows for full system compromise, including data exfiltration, modification, or deletion. All variables interpolated into shell commands must be properly sanitized or escaped. For file paths, ensure they are canonicalized and restricted to expected directories. Consider using a safer API for file operations instead of direct shell execution where possible. If shell execution is necessary, use `shlex.quote()` or equivalent for each variable. | LLM | SKILL.md:34 | |
| CRITICAL | Unsanitized `<name>` placeholder allows command injection and path traversal The `<name>` placeholder, which is expected to be user-provided (e.g., for draft names or soul names), is directly interpolated into shell commands without sanitization. This allows an attacker to inject arbitrary shell commands or perform path traversal to read/write files outside the intended `{dataPath}` directory. For example, an attacker could provide `../../../../etc/passwd` as a draft name to read system files, or `foo; rm -rf /;` to execute arbitrary commands. All user-provided input, especially filenames or paths, must be strictly validated and sanitized before being used in shell commands. Implement a whitelist of allowed characters for filenames and prevent directory traversal sequences (e.g., `../`). Use `shlex.quote()` or equivalent for each variable if shell execution is unavoidable. | LLM | SKILL.md:58 | |
| HIGH | Sensitive path access: AI agent config Access to AI agent config path detected: '~/.clawdbot/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/humanji7/solobuddy/SKILL.md:8 | |
| MEDIUM | Skill requires powerful CLIs (`gh`, `bird`) and uses broad system commands The skill manifest declares dependencies on `gh` (GitHub CLI) and `bird` CLIs, and the skill itself uses commands like `git`, `cat`, `echo`, `mkdir`, `touch`, `ls`, `tail`, `cd`. While these tools are functional, their broad capabilities, when combined with the identified command injection vulnerabilities (Findings 1 & 2), significantly amplify the potential impact of an exploit, allowing for actions like repository manipulation, arbitrary file system access, and potentially network requests. Review the necessity of each required CLI tool and system command. If possible, use more constrained APIs or sandboxed environments. Ensure that any interaction with these powerful tools is done with thoroughly sanitized inputs. | LLM | SKILL.md:1 | |
| LOW | `custom` voice profile can load arbitrary content into LLM prompt The `solobuddy.voice` configuration allows for a `custom` profile, which loads content from `{dataPath}/voice.md`. If an attacker can control the content of `voice.md` (e.g., by exploiting a command injection vulnerability to write to it), they could inject malicious instructions or data into the LLM's prompt, potentially manipulating its behavior or extracting sensitive information. This is an indirect prompt injection vector, relying on a prior exploit. Ensure that files loaded for LLM prompting are from trusted sources and cannot be modified by untrusted input. If user-controlled content must be used, it should be treated as data, not instructions, and passed to the LLM in a way that prevents prompt injection (e.g., as part of a user message, not system instructions). | LLM | SKILL.md:30 |
Scan History
Embed Code
[](https://skillshield.io/report/ab37daa2d48569de)
Powered by SkillShield