Trust Assessment
email-triage received a trust score of 15/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 2 critical, 1 high, 2 medium, and 0 low severity. Key findings include Network egress to untrusted endpoints, Suspicious import: urllib.request, Prompt Injection via Email Content.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints HTTP request to raw IP address Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/briancolinger/email-triage/scripts/email-triage.py:20 | |
| CRITICAL | Network egress to untrusted endpoints HTTP request to raw IP address Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/briancolinger/email-triage/scripts/email-triage.py:52 | |
| HIGH | Prompt Injection via Email Content The `classify_with_ollama` function constructs an LLM prompt using untrusted email content (sender, subject, and body preview) via f-strings. A malicious actor could craft an email with specially designed content to inject instructions into the Ollama LLM, potentially manipulating its classification behavior, causing it to ignore safety instructions, or even attempting to extract sensitive information if the LLM can be coerced. Implement robust input sanitization for all untrusted email fields before they are incorporated into the LLM prompt. Consider using a structured input format (e.g., JSON) for the LLM that clearly separates instructions from user data, making injection more difficult. Ensure the LLM itself has strong guardrails against prompt injection. | LLM | scripts/email-triage.py:141 | |
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/briancolinger/email-triage/scripts/email-triage.py:161 | |
| MEDIUM | Potential Data Exfiltration via Configurable Ollama Endpoint The skill sends sensitive email content (sender, subject, and body preview) to an Ollama LLM endpoint for classification. While the default `OLLAMA_URL` is localhost, it is configurable via the `OLLAMA_URL` environment variable. If a user configures this variable to a malicious or untrusted external server, sensitive email data could be exfiltrated to that server. This represents a data exfiltration risk if the configuration is compromised or misconfigured. Warn users about the security implications of configuring `OLLAMA_URL` to untrusted external endpoints. Consider adding a mechanism to restrict `OLLAMA_URL` to known safe hosts or requiring explicit confirmation for non-local URLs. Ensure that the data sent to Ollama is minimized to only what is strictly necessary for classification. | LLM | scripts/email-triage.py:37 |
Scan History
Embed Code
[](https://skillshield.io/report/2501c44ecadf9e50)
Powered by SkillShield