Trust Assessment
outreach received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via Unsanitized User Input in `curl` arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Unsanitized User Input in `curl` arguments | LLM | SKILL.md:17 |

The skill defines `curl` commands intended to be executed in a shell environment. These commands include placeholders for dynamic data such as `firstName`, `lastName`, `emails`, `prospect_id`, and `sequence_id`. If an LLM constructs and executes these commands by interpolating user-provided input directly into these fields without proper sanitization or escaping, an attacker could inject arbitrary shell commands, leading to remote code execution on the host system where the skill is executed.

Recommendation: implement robust input sanitization and shell escaping for all user-provided data before it is used to construct and execute shell commands. For example, use a library function that properly escapes arguments for the shell. Alternatively, consider a dedicated API client library in a language like Python or JavaScript instead of direct shell execution of `curl` commands, which offers better control over data serialization and API interaction.
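The mitigation described above can be sketched in Python. The safest option is to build the `curl` invocation as an argv list, so no shell ever parses the user-supplied value; when a single shell string is unavoidable, `shlex.quote` escapes the payload. The endpoint URL and field name below are illustrative placeholders, not taken from the skill itself.

```python
import json
import shlex

API_URL = "https://api.example.com/prospects"  # hypothetical endpoint


def curl_argv(first_name: str) -> list[str]:
    """Build the curl call as an argv list: the user value is passed as a
    single argument and JSON-encoded, so embedded quotes stay plain data."""
    body = json.dumps({"firstName": first_name})
    return ["curl", "-X", "POST", API_URL, "-d", body]


def curl_shell_line(first_name: str) -> str:
    """If a single shell string is required, escape the payload with
    shlex.quote before splicing it into the command line."""
    body = json.dumps({"firstName": first_name})
    return f"curl -X POST {API_URL} -d {shlex.quote(body)}"


# A hostile value round-trips as data instead of breaking out of the command:
evil = '"; rm -rf ~ #'
assert json.loads(curl_argv(evil)[-1])["firstName"] == evil
```

With the argv form, the string `"; rm -rf ~ #` never reaches a shell at all; with the quoted form, `shlex.quote` wraps the whole JSON body so the shell treats it as one literal argument.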
[Full report](https://skillshield.io/report/9564929c906a84f8)