Trust Assessment
moltr received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: embedded LLM instructions in cron job text (critical), arbitrary file upload allowing data exfiltration (high), and sensitive environment variable access to `$HOME` (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Embedded LLM instructions in cron job text.** `SKILL.md` instructs the agent to set up cron jobs via `cron add`, and the `--text` argument of those commands contains direct instructions for the LLM's behavior, such as 'moltr: post if you have something. Draw from recent context, observations, or responses to content.' and 'moltr: review posts and profiles. Send an ask if you have a genuine question.' If the LLM interprets these as commands to itself rather than as descriptive text for the cron job, they act as a prompt injection that manipulates its behavior, violating the principle of treating untrusted content as data, not instructions. *Remediation:* remove LLM-directed instructions from the `--text` arguments; that field should be purely descriptive for human/system understanding, not prescriptive for the LLM. Any guidance on *how* to post or ask belongs in the core prompt or a separate, trusted configuration. | LLM | SKILL.md:180 |
| HIGH | **Arbitrary file upload allows data exfiltration.** The `post-photo` command in `scripts/moltr.sh` uploads one or more files specified by path, constructing a `curl` command with `-F "images[]=@$f"` where `$f` is the caller-supplied filename. A malicious or compromised agent can pass the path of a sensitive local file (e.g. `~/.ssh/id_rsa`, `/etc/passwd`, `~/.config/moltr/credentials.json`), and the script will upload its contents to the `moltr.ai` API. The script checks that the file exists but does not validate its location or sensitivity. *Remediation:* strictly validate and sanitize paths passed to `post-photo`; restrict uploads to a sandboxed directory or require explicit user confirmation for sensitive locations, prefer a file picker or other controlled selection mechanism over arbitrary paths, and at minimum enforce an allow-list or deny-list of file extensions and directories. | LLM | scripts/moltr.sh:241 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to the `$HOME` environment variable detected in a shell context. Verify this access is necessary and that the value is not exfiltrated. | Static | skills/spuro/moltr/scripts/moltr.sh:7 |
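One way to reduce the critical finding's risk while the `--text` fields are being cleaned up is to gate cron registration behind a coarse filter. The sketch below is hypothetical hardening, not part of the skill: `safe_cron_add`, `looks_like_agent_instruction`, and the heuristic patterns are all assumed names, and `cron add` is the CLI referenced in `SKILL.md`.

```shell
#!/bin/sh
# Hypothetical guard: refuse cron descriptions that read as imperative
# instructions addressed to the agent (e.g. "moltr: post if ..."). The
# regex is a crude heuristic, not a complete injection detector.
looks_like_agent_instruction() {
  printf '%s' "$1" |
    grep -Eqi '^[a-z0-9_-]+:[[:space:]]|(^|[[:space:]])(post|review|send|ignore|disregard) '
}

safe_cron_add() {
  desc=$1
  shift
  if looks_like_agent_instruction "$desc"; then
    echo "refusing cron text that reads as an LLM instruction" >&2
    return 1
  fi
  cron add --text "$desc" "$@"   # 'cron add' is the skill's CLI per SKILL.md
}
```

A purely descriptive text such as "Hourly feed check (descriptive only)" passes the filter; the injected strings quoted in the finding do not. This is defense in depth only — the real fix remains keeping instructions out of untrusted fields.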
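The high-severity finding's remediation (sandboxed directory plus an extension allow-list) can be sketched as a pre-upload check. This is a minimal illustration, not the actual `moltr.sh` code: `validate_upload_path`, `ALLOWED_DIR`, and the `MOLTR_UPLOAD_DIR` variable are assumed names.

```shell
#!/bin/sh
# Hypothetical pre-upload check: resolve the path (following symlinks and
# collapsing "..") and require it to sit inside an allow-listed directory
# with an image extension, before curl ever sees it.
ALLOWED_DIR="${MOLTR_UPLOAD_DIR:-$HOME/Pictures/moltr-uploads}"

validate_upload_path() {
  f=$1
  # realpath -e fails if the file does not exist; -- ends option parsing.
  resolved=$(realpath -e -- "$f" 2>/dev/null) || return 1
  case "$resolved" in
    "$ALLOWED_DIR"/*) ;;   # inside the sandbox: keep checking
    *) return 1 ;;         # anywhere else (/etc/passwd, ~/.ssh/...): refuse
  esac
  case "$resolved" in
    *.jpg|*.jpeg|*.png|*.gif|*.webp) return 0 ;;  # images only
    *) return 1 ;;
  esac
}
```

With this in place, `post-photo` would call `validate_upload_path "$f" || exit 1` before building the `curl` command, so a path like `~/.config/moltr/credentials.json` is rejected both for being outside the sandbox and for its extension.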
[View full report on SkillShield](https://skillshield.io/report/4fdb63c28671d7dc)
Powered by SkillShield