Trust Assessment
moltbot-plugin-2do received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 0 high, 2 medium, and 1 low severity. Key findings include an unpinned npm dependency version, handling of sensitive SMTP credentials, and user-controlled input printed to the console without sanitization.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Unpinned npm dependency version.** Dependency `nodemailer` is not pinned to an exact version (`^7.0.13`). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | `skills/chuckiefan/moltbot-plugin-2do/package.json` |
| MEDIUM | **Handles sensitive SMTP credentials.** The skill requires and directly uses `SMTP_PASS` (the SMTP password) from environment variables to authenticate with an SMTP server. While necessary for its intended functionality (sending email), handling such a sensitive credential introduces risk: a malicious modification to the skill's code could exfiltrate or misuse the password. `SMTP_USER`, `SMTP_HOST`, and `SMTP_PORT` are handled similarly. Ensure the environment where the skill runs is well secured, implement strict access controls, monitor outbound connections beyond the configured SMTP server, and prefer a secret manager over raw environment variables where one is available in the execution environment. | LLM | `src/config.ts:22` |
| LOW | **User-controlled input printed to console without sanitization.** The skill prints `task.title`, which is derived from user input, directly to the console via `console.log`. If the input contains terminal escape sequences (e.g., ANSI codes), it could manipulate the terminal display or, in certain vulnerable terminal emulators, trigger command execution. In an LLM agent context this risk is generally low, since the LLM typically processes raw text output, but it is best practice to sanitize output before printing: strip non-printable characters or escape known escape-sequence prefixes in `task.title` before logging it. | LLM | `src/main.ts:54` |
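The unpinned-dependency finding can be addressed by dropping the caret range in `package.json`. A minimal sketch, assuming `7.0.13` (the version implied by the reported `^7.0.13` range) is the version actually in use:

```json
{
  "dependencies": {
    "nodemailer": "7.0.13"
  }
}
```

Running `npm install --save-exact nodemailer@7.0.13` (or setting `save-exact=true` in `.npmrc`) produces the same pinned entry, and a committed lockfile further constrains transitive dependencies.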
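For the SMTP-credentials finding, fail-fast validation at startup limits the blast radius of a misconfigured environment. The sketch below is an illustrative assumption, not the skill's actual `src/config.ts`: the `loadSmtpConfig` helper and its checks are hypothetical, though the `SMTP_*` variable names come from the report.

```typescript
// Hypothetical hardened loader for the SMTP_* environment variables
// named in the report. Validates presence and port range up front so
// the skill never attempts to authenticate with partial credentials.
interface SmtpConfig {
  host: string;
  port: number;
  user: string;
  pass: string;
}

function loadSmtpConfig(env: Record<string, string | undefined>): SmtpConfig {
  const required = ["SMTP_HOST", "SMTP_PORT", "SMTP_USER", "SMTP_PASS"] as const;
  for (const name of required) {
    if (!env[name]) {
      // Fail fast: a missing credential is a configuration error,
      // not something to paper over at send time.
      throw new Error(`Missing required environment variable: ${name}`);
    }
  }
  const port = Number(env.SMTP_PORT);
  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
    throw new Error(`Invalid SMTP_PORT: ${env.SMTP_PORT}`);
  }
  return { host: env.SMTP_HOST!, port, user: env.SMTP_USER!, pass: env.SMTP_PASS! };
}
```

In the skill this would be called once as `loadSmtpConfig(process.env)`; swapping the raw environment lookup for a secret-manager client would keep the same interface while moving `SMTP_PASS` out of the process environment.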
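The console-output finding suggests stripping non-printable characters before logging. A minimal sketch of such a filter (the `sanitizeForConsole` helper is hypothetical, not part of the skill's `src/main.ts`):

```typescript
// Hypothetical helper: strip ANSI CSI sequences and remaining control
// characters before echoing user-derived text to the terminal.
function sanitizeForConsole(input: string): string {
  return input
    // CSI sequences such as "\x1b[31m" (colors, cursor movement)
    .replace(/\x1b\[[0-9;]*[A-Za-z]/g, "")
    // remaining C0 control characters (except tab/newline) and DEL
    .replace(/[\x00-\x08\x0b-\x1f\x7f]/g, "");
}

console.log(sanitizeForConsole("\x1b[31mBuy milk\x1b[0m")); // prints: Buy milk
```

At the reported call site this would become `console.log(sanitizeForConsole(task.title))`, leaving ordinary titles untouched while defusing escape-sequence payloads.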