Trust Assessment
email-security received a trust score of 65/100, placing it in the Caution category. This skill raises security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via Untrusted Input in Script Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100; all three findings originate in that layer.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via Untrusted Input in Script Arguments.** The skill instructs the AI agent to execute Python scripts via shell commands, passing untrusted email content (sender email, authentication headers, email body) directly as arguments. If the agent constructs these commands via string interpolation and executes them with a shell (e.g., `subprocess.run(..., shell=True)`), a malicious email sender could inject arbitrary shell commands by embedding special characters in the email address, headers, or body text. Although the Python scripts use `argparse`, which is safe once arguments are parsed, the vulnerability lies in how the agent constructs and executes the command. **Remediation:** instruct the agent to call `subprocess.run()` with `shell=False` and pass arguments as a list of strings, so shell metacharacters in untrusted input are never interpreted as commands. For example, `subprocess.run(['python', 'scripts/verify_sender.py', '--email', email_variable, '--config', 'references/owner-config.md'])`. | LLM | SKILL.md:50 |
| HIGH | **Potential Command Injection via Untrusted Input in Script Arguments.** Same root cause as the finding at SKILL.md:50; this instance additionally passes a JSON string of authentication headers as an argument, which could also carry injected metacharacters. **Remediation:** as above, pass an argument list with `shell=False`. For example, `subprocess.run(['python', 'scripts/verify_sender.py', '--email', email_variable, '--config', 'references/owner-config.md', '--headers', json_headers_string])`. | LLM | SKILL.md:53 |
| HIGH | **Potential Command Injection via Untrusted Input in Script Arguments.** Same root cause as the finding at SKILL.md:50; this instance passes the raw email body as a text argument. **Remediation:** as above, pass an argument list with `shell=False`. For example, `subprocess.run(['python', 'scripts/sanitize_content.py', '--text', email_body_variable])`. | LLM | SKILL.md:73 |
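The remediation in all three findings is the same pattern: pass untrusted values as list elements so no shell ever parses them. A minimal runnable sketch of that pattern (the inline `-c` stub stands in for the skill's actual `scripts/verify_sender.py`, and the email values are hypothetical):

```python
import json
import subprocess
import sys

# Hypothetical untrusted input: an "email address" laced with shell metacharacters.
sender_email = 'attacker@example.com"; echo PWNED; "'
headers_json = json.dumps({"Authentication-Results": "spf=fail"})

# Stub child script that just prints its argv, so the example runs
# without the skill's real scripts/verify_sender.py.
stub = ["-c", "import sys; print(sys.argv[1:])"]

# With an argument list and shell=False (the default), each element is
# passed verbatim to the child process; the metacharacters in
# sender_email are inert data, never interpreted by a shell.
result = subprocess.run(
    [sys.executable, *stub, "--email", sender_email, "--headers", headers_json],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```

The child receives `'attacker@example.com"; echo PWNED; "'` as a single literal argument; with `shell=True` and string interpolation, the embedded `echo PWNED` would instead have executed as a command.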
Embed Code
[](https://skillshield.io/report/3022b47273455945)
Powered by SkillShield