Trust Assessment
moltbook-agent received a trust score of 51/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include a missing required `name` field, an unpinned npm dependency version, and a suspicious `dotenv` package version indicating potential typosquatting or a malicious package.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Suspicious `dotenv` package version indicating potential typosquatting or malicious package. The `package.json` and `package-lock.json` files specify `dotenv` version `^17.2.3`. A review of the official `dotenv` package on npm (npmjs.com/package/dotenv) shows that the latest stable version is `16.3.1`, and no `17.x.x` versions exist. While the `package-lock.json` references `dotenvx.com` in its funding URL, the package name remains `dotenv`. This discrepancy is a strong indicator of a potential typosquatting attack or a malicious package substitution, where a similarly named or versioned package could be installed containing harmful code. This poses a critical supply-chain risk. Verify the intended `dotenv` package. If the official `dotenv` package is desired, update the dependency to a legitimate and current version (e.g., `^16.3.1`). If `dotenvx` is intended, ensure the package name in `package.json` is `dotenvx` and not `dotenv`. After correction, remove `node_modules` and `package-lock.json` and reinstall dependencies to ensure the correct package is installed. | LLM | package.json:12 |
| HIGH | User message directly passed to LLM, vulnerable to prompt injection. The `userMessage` (untrusted user input) is directly inserted into the `content` field of the `user` role in the OpenAI chat completion request within the `think` function. Although the skill attempts to classify and handle 'manipulation' questions with specific responses, this mechanism does not prevent other forms of prompt injection. A malicious user could craft input to override system instructions, change the agent's persona, or attempt to extract sensitive information from the LLM's context or internal knowledge base, bypassing the intended behavioral constraints. Implement robust input sanitization and validation for `userMessage`. Consider using a separate, isolated LLM call for instruction parsing, or employ prompt templating with strict variable insertion. Harden the system prompt against instruction overrides. Input/output guardrails or a 'red-teaming' LLM that filters malicious inputs could also be considered. | LLM | think.js:60 |
| MEDIUM | Missing required field: `name`. The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | skills/shmagalow-del/moltbook-agent/SKILL.md:1 |
| MEDIUM | Unpinned npm dependency version. Dependency `dotenv` is not pinned to an exact version (`^17.2.3`). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/shmagalow-del/moltbook-agent/package.json |
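The critical typosquatting finding and the medium unpinned-version finding share one remediation: confirm what the registry actually publishes (e.g., `npm view dotenv versions`) and pin an exact version. A minimal sketch of the corrected `package.json` fragment, assuming the official `dotenv` package is the one intended; the version shown follows the report's own recommendation:

```json
{
  "dependencies": {
    "dotenv": "16.3.1"
  }
}
```

As the finding notes, after editing, delete `node_modules` and `package-lock.json` and run `npm install` so the lockfile is regenerated against the pinned version.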
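The missing-`name` finding is the simplest fix: add the field at the top of the SKILL.md YAML frontmatter. A sketch, assuming the skill keeps its directory name and leaving any other frontmatter fields unchanged:

```yaml
---
name: moltbook-agent
---
```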
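The prompt-injection finding recommends input sanitization plus strict variable insertion. A minimal sketch of what that could look like in front of the `think` function's OpenAI call; `sanitizeUserMessage`, `buildMessages`, and the pattern list are illustrative names and heuristics, not code from the skill, and pattern matching alone is not a complete defense:

```javascript
// Hypothetical guardrail for untrusted input before it reaches the
// chat completion request. Names and patterns are illustrative only.
const MAX_MESSAGE_LENGTH = 2000;

// Patterns that commonly signal an attempt to override system instructions.
const INJECTION_PATTERNS = [
  /ignore (all )?(previous|prior|above) instructions/i,
  /you are now/i,
  /reveal (your )?system prompt/i,
];

function sanitizeUserMessage(userMessage) {
  if (typeof userMessage !== "string") {
    throw new TypeError("userMessage must be a string");
  }
  // Truncate oversized input and strip non-printing control characters.
  const cleaned = userMessage
    .slice(0, MAX_MESSAGE_LENGTH)
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, "");
  const flagged = INJECTION_PATTERNS.some((re) => re.test(cleaned));
  return { cleaned, flagged };
}

// Strict role separation: user text only ever occupies the 'user' slot,
// never concatenated into the system prompt string.
function buildMessages(systemPrompt, userMessage) {
  const { cleaned, flagged } = sanitizeUserMessage(userMessage);
  if (flagged) {
    throw new Error("Rejected: input matched an injection pattern");
  }
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: cleaned },
  ];
}
```

Denylist patterns catch only crude attacks; the layered controls the finding lists (an isolated parsing call, output guardrails, a filtering LLM) are what raise the bar against novel phrasings.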
Full report: [skillshield.io/report/aeda9f8a90e08b5e](https://skillshield.io/report/aeda9f8a90e08b5e)
Powered by SkillShield