Trust Assessment
email-news-digest received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 2 critical, 3 high, 1 medium, and 0 low severity. Key findings include "Hidden network beacons / undisclosed telemetry," "Unsanitized user input in EMAIL_QUERY leads to command injection," and "Unsanitized user input in RECIPIENTS leads to command injection."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, indicating serious behavioral-safety risks.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unsanitized user input in `EMAIL_QUERY` leads to command injection.** The `EMAIL_QUERY` parameter, directly controlled by the user via command-line arguments, is passed unsanitized to the `gog gmail search` command. An attacker can inject arbitrary shell commands by including metacharacters (e.g., `;`, `\|`, `&`, `$(...)`) in the `--email-query` argument, potentially leading to arbitrary code execution on the host system. *Remediation:* sanitize or escape `$EMAIL_QUERY` before passing it to `gog gmail search`; consider `printf %q` for shell escaping if `gog` expects a single argument, or ensure `gog` itself provides a safe way to pass query strings that are not interpreted as shell commands. | LLM | scripts/process_and_send.sh:48 |
| CRITICAL | **Unsanitized user input in `RECIPIENTS` leads to command injection.** The `RECIPIENTS` parameter, directly controlled by the user via command-line arguments, is passed unsanitized to the `gog gmail send` command. An attacker can inject arbitrary shell commands by including metacharacters (e.g., `;`, `\|`, `&`, `$(...)`) in the `--recipients` argument, potentially leading to arbitrary code execution on the host system. *Remediation:* validate and escape `$RECIPIENTS` before passing it to `gog gmail send`, so that shell metacharacters in the recipient list are never interpreted. | LLM | scripts/process_and_send.sh:105 |
| HIGH | **Hidden network beacons / undisclosed telemetry.** Command output is piped through base64 encoding. *Remediation:* remove undisclosed network calls and telemetry; all outbound communication should be documented and necessary for the skill's stated purpose. Note that BCC injection in email tools is almost always malicious. | Manifest | skills/matthewxfz3/email-news-digest/scripts/process_and_send.sh:4 |
| HIGH | **Untrusted email content will be used as an LLM prompt without sanitization (future risk).** `SKILL.md` explicitly states that `summarize_content.py`, which currently processes the untrusted email body (`EMAIL_BODY_DECODED`), will be updated to integrate a large language model. Once that happens, raw untrusted email content will be fed directly to the LLM as a prompt, creating a severe prompt-injection vulnerability: malicious email content could manipulate the LLM's behavior, leading to data exfiltration, unauthorized actions, or generation of harmful content. *Remediation:* sanitize and validate all untrusted content before it is passed to an LLM; use prompt templating, input filtering, and output parsing to constrain LLM behavior; consider a separate, isolated LLM for untrusted inputs or a content-moderation layer. | LLM | SKILL.md:30 |
| HIGH | **Untrusted LLM-generated content will be embedded into an HTML email without escaping (future risk).** `SKILL.md` indicates future LLM integration for summarization. `process_and_send.sh` then takes the summary output (which would be derived from untrusted email content) and embeds it directly into an HTML email template using `sed` replacements. If the LLM generates malicious HTML tags or attributes (e.g., `<script>`, `<iframe>`, `onerror` attributes) in response to a prompt injection, they are included verbatim in the final email, enabling cross-site scripting (XSS) in email clients, data exfiltration from the recipient's browser, or display of phishing content. *Remediation:* rigorously HTML-escape all LLM-generated content before embedding it; do not rely on simple `sed` replacements for untrusted content; use an HTML templating engine with auto-escaping or a library designed for safe HTML sanitization. | LLM | scripts/process_and_send.sh:90 |
| MEDIUM | **Unsanitized user input passed to an external image generation script.** The user-controlled `IMAGE_PROMPT` is passed directly as an argument to the `nano-banana-pro` skill's `generate_image.py` script via `uv run`. If `generate_image.py` internally interpolates this prompt into a shell command (e.g., `subprocess.run(f"some_image_tool --prompt {user_prompt}")`) without proper sanitization or escaping, command injection is possible. This is a supply-chain risk: the vulnerability depends on the implementation of an external skill. *Remediation:* review `nano-banana-pro/scripts/generate_image.py` to confirm it safely handles the `--prompt` argument, especially if that argument reaches internal shell commands or LLMs; if the external script cannot be guaranteed safe, sanitize or escape `$IMAGE_PROMPT` before passing it, or use a more secure method of inter-process communication. | LLM | scripts/process_and_send.sh:86 |
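The two command-injection findings and the HTML-embedding finding share a common remediation pattern: validate untrusted values, pass them as single quoted argv elements rather than interpolating them into shell strings, and escape anything destined for HTML. The bash sketch below illustrates that pattern; the variable names (`EMAIL_QUERY`, `RECIPIENTS`) follow the findings, but the validation regex and the `html_escape` helper are illustrative, and the assumption that `gog` treats each argv element as a literal value is not verified against the skill's actual code.

```shell
#!/usr/bin/env bash
# Hardening sketch for process_and_send.sh-style input handling (hypothetical).
set -euo pipefail

# Defaults stand in for the skill's --email-query / --recipients arguments.
EMAIL_QUERY="${1:-from:news@example.com}"
RECIPIENTS="${2:-alice@example.com,bob@example.com}"

# 1. Allow-list validation: recipients must look like a comma-separated
#    list of email addresses; anything with shell metacharacters is rejected.
addr_re='^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+(,[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+)*$'
if ! [[ "$RECIPIENTS" =~ $addr_re ]]; then
    echo "error: invalid recipient list" >&2
    exit 1
fi

# 2. Pass values as quoted argv elements -- never build a command string
#    that a shell re-parses (no eval, no sh -c "..."). For example:
#   gog gmail search --email-query "$EMAIL_QUERY"
#   gog gmail send   --recipients  "$RECIPIENTS"

# 3. If a value must survive a second shell evaluation (e.g. embedded in a
#    remote command string), printf %q produces a shell-safe quoting of it.
SAFE_QUERY=$(printf '%q' "$EMAIL_QUERY")
echo "escaped query: $SAFE_QUERY"

# 4. HTML-escape untrusted text before substituting it into the email
#    template (addresses the XSS finding). Order matters: & goes first.
html_escape() {
    local s=$1
    s=${s//&/&amp;}
    s=${s//</&lt;}
    s=${s//>/&gt;}
    s=${s//\"/&quot;}
    printf '%s' "$s"
}

html_escape '<script>alert(1)</script>'  # -> &lt;script&gt;alert(1)&lt;/script&gt;
```

Passing untrusted values as separate quoted arguments sidesteps shell interpretation entirely, which is why it is preferable to escaping; `printf %q` is only needed when a string genuinely must pass through a second round of shell parsing.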
[View the full report on SkillShield](https://skillshield.io/report/39f2866d053693cc)