Trust Assessment
diarybeast received a trust score of 62/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 low severity. Key findings include command injection via the 'exec' tool, excessive permissions (the 'exec' tool is requested), and a missing Node lockfile.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via 'exec' tool.** The skill explicitly requests the 'exec' tool permission and instructs the user (and by extension, the LLM) to run shell commands like `node setup.mjs` and various `curl` commands. This allows arbitrary command execution on the host system if the LLM is prompted to inject malicious commands or arguments into these execution paths; an attacker could craft a prompt to execute arbitrary code on the system where the agent is running. *Recommendation:* Re-evaluate the necessity of the 'exec' tool. If essential, implement strict input validation and sanitization for all arguments passed to 'exec' to prevent injection. Consider more constrained tools or APIs instead of raw shell execution, and ensure user-provided input is never directly interpolated into shell commands. | LLM | SKILL.md:17 |
| HIGH | **Excessive Permissions: 'exec' tool requested.** The skill's manifest explicitly declares a dependency on the 'exec' tool. This grants the skill the ability to execute arbitrary shell commands on the host system, a highly privileged operation that significantly increases the attack surface for command injection, data exfiltration, or system compromise if the skill's execution context is compromised. *Recommendation:* Restrict tool access to the absolute minimum required. If 'exec' is necessary, ensure all commands executed through it are strictly controlled, whitelisted, and their arguments thoroughly sanitized. Consider whether a more specific, less powerful tool could achieve the same functionality without granting full shell access. | LLM | package.json:6 |
| MEDIUM | **Data Exfiltration to Third-Party API.** The `setup.mjs` script and the `curl` commands in `SKILL.md` send user-specific data (generated `address`, `signature`, `nonce`, `TOKEN`, `userAddress`, `encryptedContent` for diary entries, and feedback messages) to an external third-party API (`https://dapp.diarybeast.xyz`). While this is the skill's intended functionality, it means user data, including potentially sensitive diary entries, is transmitted to a third-party service. This poses a risk if that service is compromised, acts maliciously, or if the LLM is prompted to insert sensitive information into fields like `encryptedContent`. *Recommendation:* Ensure robust encryption and privacy policies are in place for the external service, and clearly communicate to users what data is collected and how it is used. For the LLM, add explicit warnings or sanitization steps before populating sensitive fields like `encryptedContent` to prevent accidental leakage of LLM context or user PII. | LLM | SKILL.md:38 |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile was found (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`). *Recommendation:* Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/dxdleady/openclaw-skill/package.json |
Full report: https://skillshield.io/report/18607ff8151ace2a