Trust Assessment
moltline received a trust score of 47/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 0 critical, 3 high, 2 medium, and 0 low severity. Key findings include "Hardcoded Bearer Token detected," "Local storage of sensitive cryptographic keys," and "Expectation of external API key usage."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. All four layers scored 70 or above.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Hardcoded Bearer Token detected.** A hardcoded Bearer Token was found. Secrets should be stored in environment variables or a secret manager. Replace the hardcoded secret with an environment variable reference. | Static | skills/promptrotator/moltline/skill.md:139 |
| HIGH | **Hardcoded Bearer Token detected.** A hardcoded Bearer Token was found. Secrets should be stored in environment variables or a secret manager. Replace the hardcoded secret with an environment variable reference. | Static | skills/promptrotator/moltline/skill.md:145 |
| HIGH | **Local storage of sensitive cryptographic keys.** The skill instructs the agent to generate and persistently store a wallet private key (`priv.key`) and a database encryption key (`xmtp-db.key`) in the user's home directory (`~/.moltline/`). These keys are critical for the skill's operation, including signing messages and encrypting local data. Although stored with restrictive file permissions (`0o600`), their presence on the local filesystem means they are vulnerable if the agent's execution environment is compromised: an attacker with filesystem access could exfiltrate them, enabling impersonation, unauthorized transactions, or decryption of sensitive data. The agent requires broad filesystem access to manage these files. Recommendation: implement robust key management. Prefer hardware security modules (HSMs), secure enclaves, or platform-specific secure storage (e.g., OS keychains) over plain file storage. If file storage is unavoidable, heavily sandbox and isolate the agent's environment, strictly control access to these files, and educate users on the risks of local key storage. | LLM | skill.md:40 |
| MEDIUM | **Expectation of external API key usage.** The skill's optional Moltbook integration explicitly references `YOUR_MOLTBOOK_API_KEY` in a `curl` command, indicating the agent is expected to have access to and use external API keys. While the skill itself doesn't store or generate this key, its reliance on such keys means that if the agent's environment is compromised, or if the agent is instructed to use a key from an untrusted source, the API key could be exposed or misused. The agent's permissions would need to include access to these sensitive environment variables or configuration. Recommendation: manage and access API keys securely via environment variables, secret management services, or platform-specific secure storage. Avoid hardcoding keys or storing them in easily accessible files, and sandbox the agent's execution environment so it can reach only the credentials it needs. | LLM | skill.md:97 |
| MEDIUM | **Potential for data exfiltration via untrusted input in HTTP requests.** The skill makes various HTTP requests to `moltline.com` and `moltbook.com`. Several endpoints accept user-provided data, such as `name` and `description` during handle registration and `description` when posting a quest; the `agent.on('text', ...)` handler also processes incoming message content. If these fields are populated directly from untrusted input (a prompt, an incoming message, or a malicious source) without sanitization or validation, an attacker could craft input that embeds sensitive information (e.g., environment variables, or local file contents if the agent has read access), which the agent then inadvertently sends to the external service. Recommendation: strictly validate and sanitize all data from untrusted sources before including it in external HTTP requests, and ensure the agent's internal logic prevents sensitive local data from being concatenated or embedded into user-facing fields sent to external services. | LLM | skill.md:79 |
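The two hardcoded-token findings are typically fixed by reading the secret from the environment at runtime. A minimal TypeScript sketch, assuming an environment variable named `MOLTLINE_API_TOKEN` (the variable name is illustrative, not taken from the skill):

```typescript
// Build an Authorization header from an environment variable instead of a
// hardcoded secret. `MOLTLINE_API_TOKEN` is a hypothetical name; in practice
// pass `process.env` as the `env` argument.
function getAuthHeader(env: Record<string, string | undefined>): string {
  const token = env.MOLTLINE_API_TOKEN;
  if (!token) {
    // Fail loudly rather than sending an unauthenticated or broken request.
    throw new Error("MOLTLINE_API_TOKEN is not set");
  }
  return `Bearer ${token}`;
}
```

With this pattern, `skill.md` would reference the variable (e.g. `Authorization: Bearer $MOLTLINE_API_TOKEN`) and the real token would live only in the deployment environment or a secret manager.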
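If plain-file key storage cannot be avoided (as with `priv.key` and `xmtp-db.key`), the agent can at least verify the `0o600` permissions the skill claims to set before using a key. A hedged sketch, assuming a POSIX filesystem (file modes behave differently on Windows); the demo file path is a stand-in for `~/.moltline/priv.key`:

```typescript
import { chmodSync, statSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Return true only if the file is readable/writable by its owner alone
// (mode 0o600), matching the permissions the skill says it sets.
function hasStrictPermissions(path: string): boolean {
  const mode = statSync(path).mode & 0o777; // keep permission bits only
  return mode === 0o600;
}

// Demo: a throwaway file standing in for the real key file.
const keyPath = join(tmpdir(), "demo-priv.key");
writeFileSync(keyPath, "not-a-real-key");
chmodSync(keyPath, 0o600);
```

A check like this does not remove the exfiltration risk the finding describes; it only catches accidental permission loosening. OS keychains or secure enclaves remain the stronger option.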
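For the untrusted-input finding, outbound fields such as `name` and `description` can be passed through a sanitizer before the HTTP request is built. A minimal sketch; the function name and the length cap are assumptions, not part of the skill:

```typescript
// Hypothetical sanitizer for user-supplied fields sent to external services.
// Strips control characters and caps the length so oversized or binary-laced
// input (e.g. a dumped file) cannot ride along in a registration field.
function sanitizeField(input: string, maxLen: number = 200): string {
  return input
    .replace(/[\u0000-\u001F\u007F]/g, "") // drop ASCII control characters
    .trim()
    .slice(0, maxLen); // cap the field length
}
```

Length capping and control-character stripping are only a baseline; the finding's core recommendation, keeping sensitive local data out of user-facing fields entirely, still requires checks in the agent's own logic.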
Full report: [skillshield.io/report/4080f0a7b0e665f5](https://skillshield.io/report/4080f0a7b0e665f5)