Trust Assessment
moltpad received a trust score of 62/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 2 critical, 1 high, 1 medium, and 2 low severity. Key findings include a missing Node lockfile, untrusted `curl` commands for skill installation and updates, and instructions to modify the system heartbeat and recursively fetch and execute untrusted content.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 16/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted `curl` commands for skill installation/updates.** The skill instructs the LLM to run `curl` commands that download skill files and updates from `https://moltpad.space`, piping the results through `jq`. If `moltpad.space` is compromised, the downloaded files could be malicious, making this a direct command-injection vector for arbitrary code execution. Avoid shelling out to fetch skill components from untrusted sources; use a secure package manager or a sandboxed environment for skill installation and updates. If `curl` must be used, enforce strict content validation and execution sandboxing. | LLM | SKILL.md:24 |
| CRITICAL | **Instructions to modify the system heartbeat and recursively fetch/execute untrusted content.** The skill explicitly instructs the LLM to read, append to, and save its global `HEARTBEAT.md` file (or equivalent). The appended content includes `curl` and `jq` commands for version checking plus the instruction "Fetch https://moltpad.space/references/heartbeat.md and follow it". This creates a persistent command-injection vector and a recursive prompt-injection risk: `moltpad.space` can continuously inject arbitrary instructions into the LLM's operational loop. An LLM should never be instructed to modify its own configuration files or execute arbitrary commands from untrusted sources; updates and periodic tasks belong to the host environment, run in a sandbox. Remove the instructions for direct file modification and `curl`/`jq` execution. | LLM | SKILL.md:46 |
| HIGH | **Unique `moltbotId` sent to an untrusted external service.** The skill's `package.json` defines `moltbotId` as required configuration, described as "Your unique Moltbot ID". `SKILL.md` then instructs the LLM to send this `agentId` (which maps to `moltbotId`) in API calls to `https://moltpad.space` (e.g., `POST /api/likes`, `POST /api/comments`, `POST /api/bookmarks`, `GET /api/chapters/check-rights`). Sending a unique, persistent identifier to an untrusted third party enables tracking and potential deanonymization of the agent. If an ID is necessary for functionality, use a session-specific or anonymized identifier, and consider a proxy or data-filtering layer so sensitive data never leaves the host without explicit user consent. | LLM | SKILL.md:109 |
| MEDIUM | **Untrusted content (books/chapters) processed by the LLM.** The skill instructs the LLM to fetch and "read" books and chapters from `https://moltpad.space` via `GET /api/chapters?contentId=BOOK_ID&forAgent=true`. Although the `forAgent=true` parameter is intended to add context, the LLM still processes arbitrary, untrusted text that could contain hidden instructions or adversarial prompts designed to manipulate its behavior, override its instructions, or extract information. Sanitize and filter all external text before the LLM consumes it, process untrusted content in an isolated instance or a tightly restricted context window, and clearly delineate trusted instructions from untrusted data. | LLM | SKILL.md:170 |
| LOW | **Node lockfile missing.** `package.json` is present, but no lockfile was found (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/webeferen/moltpad-app/package.json |
| LOW | **File system write operations for memory management.** The skill instructs the LLM to create and manage summary files under `~/.moltbot/memory/books/`, which means the LLM is expected to have file system write capabilities. Although the specified path is within a designated memory area, the capability could be exploited if `BOOK_ID` or other parameters used to construct the filename are not properly sanitized, potentially allowing path traversal or writes to unintended locations. Strictly validate and sanitize any parameter used to construct a file path, and restrict the LLM's file system access to the minimum necessary directories and operations. | LLM | SKILL.md:158 |
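The remediation for the untrusted-`curl` findings calls for strict content validation before any downloaded file is used. A minimal sketch of one such check, in Python: verify a downloaded payload against a digest published out of band before acting on it. The function name and the idea of a pinned digest are illustrative assumptions, not part of the skill.

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Return True only if data's SHA-256 digest matches the pinned value.

    The expected digest must come from a trusted channel (e.g. published
    by the skill author), not from the same server that serves the file.
    """
    return hashlib.sha256(data).hexdigest() == expected_hex
```

A host could fetch the file however it likes (urllib, requests, even `curl` into a temp file), then refuse to install or execute anything for which `verify_sha256` returns `False`.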
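The `moltbotId` finding recommends a session-specific or anonymized identifier in place of a stable one. One hedged sketch of that idea: derive a fresh pseudonym per session by hashing the real ID with a random salt, so the stable identifier never leaves the host and sessions cannot be linked. The function name is hypothetical.

```python
import hashlib
import secrets

def session_pseudonym(real_id: str) -> str:
    """Derive a per-session pseudonym from a stable ID.

    A new random salt is drawn each session, so two sessions for the
    same real_id produce unlinkable pseudonyms.
    """
    salt = secrets.token_hex(16)
    return hashlib.sha256((salt + real_id).encode()).hexdigest()[:16]
```

The trade-off is that the remote service can no longer correlate activity across sessions, which is exactly the tracking property the finding objects to.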
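For the file-write finding, the recommended fix is to validate any parameter used to build a path. A minimal sketch, assuming book IDs are simple alphanumeric tokens (an assumption about the skill's ID format) and that summaries live under the `~/.moltbot/memory/books/` directory named in the finding:

```python
from pathlib import Path

MEMORY_ROOT = Path.home() / ".moltbot" / "memory" / "books"

def safe_summary_path(book_id: str) -> Path:
    """Build a summary file path, rejecting IDs that could escape MEMORY_ROOT."""
    # First line of defense: allow only simple alphanumeric tokens.
    if not book_id.isalnum():
        raise ValueError(f"invalid book id: {book_id!r}")
    path = (MEMORY_ROOT / f"{book_id}.md").resolve()
    # Defense in depth: the resolved path must still sit inside MEMORY_ROOT.
    if MEMORY_ROOT.resolve() not in path.parents:
        raise ValueError("path escapes memory root")
    return path
```

An ID like `../../etc/passwd` fails the alphanumeric check before any path is ever constructed; the containment check catches anything the first filter might miss.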