Trust Assessment
shitty-email received a trust score of 86/100, placing it in the Mostly Trusted category. The skill passed most security checks, though one high-severity finding warrants attention.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via Unsanitized Variables in Shell Commands.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Unsanitized Variables in Shell Commands | LLM | SKILL.md:49 |

The skill uses `curl` commands that incorporate variables such as `{token}` and `{email_id}`. If the Large Language Model (LLM) interpolates user-provided or otherwise untrusted data into these variables without proper shell escaping or sanitization, an attacker could inject arbitrary shell commands: if `{token}` or `{email_id}` contains shell metacharacters (e.g., `"; rm -rf /"`), malicious commands could execute on the host system. This is a common risk when skills rely on shell execution with dynamic inputs. The implementation should ensure that every variable interpolated into a shell command is properly escaped, for example with `printf %q` in bash or a dedicated shell-escaping library for the language of the execution environment. Alternatively, where possible, use a programmatic HTTP client instead of `curl` so API calls avoid shell execution entirely.
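The escaping the finding recommends can be sketched programmatically. Below is a minimal Python illustration, not taken from the skill itself: the API URL and the `token`/`email_id` values are hypothetical stand-ins for the skill's `{token}` and `{email_id}`, and `shlex.quote` plays the role of bash's `printf %q`.

```python
import shlex

# Hypothetical stand-ins for the skill's {token} and {email_id};
# the token deliberately contains shell metacharacters.
token = 'abc"; rm -rf /tmp/x; "'
email_id = "msg-123"

# Unsafe pattern (what the finding warns about): naive interpolation
# builds a string that a shell would re-parse, executing the payload.
unsafe = f'curl -H "Authorization: Bearer {token}" https://api.example.com/emails/{email_id}'

# Safer: quote each untrusted value before splicing it into the command,
# so metacharacters are passed literally (analog of bash `printf %q`).
safe = (
    f"curl -H {shlex.quote('Authorization: Bearer ' + token)} "
    f"{shlex.quote('https://api.example.com/emails/' + email_id)}"
)

# Safest: skip the shell entirely by passing an argument list, e.g.
# subprocess.run(argv) -- no shell means metacharacters are never interpreted.
argv = [
    "curl", "-H", f"Authorization: Bearer {token}",
    f"https://api.example.com/emails/{email_id}",
]
```

Passing an argument list (or using a programmatic HTTP client) is the more robust fix, since it removes the shell from the execution path rather than trying to neutralize every metacharacter.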
Scan History
[View the full report on SkillShield](https://skillshield.io/report/5372250e9fe41e9b)
Powered by SkillShield