Trust Assessment
moltbook received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. The key findings are "Potential Command Injection via User Input to Shell Script" (high) and "Unpinned External Dependency and Lack of Integrity Verification" (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User Input to Shell Script.** The skill instructs the agent to invoke a local shell script (`./scripts/moltbook.sh`) with user-provided arguments for operations such as replying to or creating posts, e.g. `moltbook.sh reply <post_id> "Your reply here"` and `moltbook.sh create "Post Title" "Post content"`. If `moltbook.sh` does not properly sanitize these user-controlled inputs before passing them to underlying shell commands, an attacker could inject arbitrary shell commands by crafting malicious input for `<post_id>`, the reply body, the post title, or the post content, leading to unauthorized code execution within the agent's environment. *Remediation:* review the source of `scripts/moltbook.sh` to ensure all user-provided arguments are rigorously sanitized and properly escaped before being used in any shell command; prefer safe argument-passing mechanisms or libraries that handle escaping automatically; consider a more robust method of inter-process communication than direct shell execution of concatenated strings. | LLM | SKILL.md:33 |
| MEDIUM | **Unpinned External Dependency and Lack of Integrity Verification.** The skill requires installing `OpenClawCLI` from an external URL (`https://openclawcli.vercel.app/`). The instructions do not pin a specific version of the CLI tool, so future installations could fetch a different or potentially malicious build, and no checksums or digital signatures are provided to verify the integrity and authenticity of the downloaded binary. This is a significant supply chain risk: a compromise of the hosting platform or the upstream project could distribute malicious software to agents using this skill. *Remediation:* pin the exact `OpenClawCLI` version the skill requires; provide cryptographic hashes (e.g., SHA-256) for the expected binaries to allow integrity verification; ideally, distribute through a trusted package manager or another mechanism with built-in integrity checks and versioning. | LLM | SKILL.md:5 |
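One way to implement the sanitization recommended for the command-injection finding is to validate each structured argument against a strict allowlist before it ever reaches a command line. The helper below is a hypothetical sketch; the function name and pattern are illustrative and are not taken from `moltbook.sh`:

```shell
# is_safe_id: accept only identifiers made of letters, digits,
# underscores, and hyphens; reject empty strings and anything
# containing shell metacharacters.
is_safe_id() {
  case "$1" in
    ""|*[!A-Za-z0-9_-]*) return 1 ;;
    *) return 0 ;;
  esac
}

# Example guard at the top of a hypothetical subcommand handler:
# is_safe_id "$2" || { echo "invalid post id" >&2; exit 1; }
```

Free-text fields such as the reply body cannot be allowlisted this strictly; for those, always pass the value as a single quoted positional argument (`"$3"`) and never interpolate it into a string handed to `eval` or `sh -c`.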
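The pin-and-verify recommendation for the dependency finding can be sketched as a small install step. The version number, URL, and expected hash in the comments below are placeholders, not real OpenClawCLI release details:

```shell
# verify_sha256 <file> <expected-hex>: succeed only if the file's
# SHA-256 digest matches the pinned value (GNU coreutils sha256sum).
verify_sha256() {
  echo "$2  $1" | sha256sum -c --quiet -
}

# Hypothetical pinned install (placeholder URL, version, and hash):
# VERSION="1.2.3"
# EXPECTED="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
# curl -fsSL -o openclaw "https://example.invalid/openclaw-${VERSION}"
# verify_sha256 openclaw "$EXPECTED" || { echo "checksum mismatch" >&2; exit 1; }
```

Failing closed on a checksum mismatch means a compromised or silently updated upstream artifact aborts the install instead of running on the agent's machine.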
Full report: [skillshield.io/report/66fbb095ed108aea](https://skillshield.io/report/66fbb095ed108aea)