Trust Assessment
maxxit-lazy-trading received a trust score of 65/100, placing it in the Caution category. The skill carries security findings that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 4 high, 0 medium, and 0 low severity. All four are instances of the same issue, potential command injection via unsanitized user input in `curl -d`, reported at four locations in SKILL.md.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, marking it as the area most in need of remediation.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Unsanitized User Input in `curl -d`. The skill documentation provides `curl` command examples where user-controlled input (e.g., 'message', 'address') is embedded directly into JSON payloads within the `-d` argument. If the host LLM constructs these commands by interpolating user input without proper shell or JSON escaping, a malicious user could inject arbitrary shell commands, for example by supplying input such as `foo' ; evil_command ; 'bar` or `0x..." ; evil_command ; echo "`. The host LLM's execution environment must ensure that all user-provided data interpolated into shell commands, especially within JSON strings passed to `curl -d`, is properly escaped: JSON-escape the string content, then shell-escape the entire payload argument. A more robust solution is to use `curl -d @-` and pipe the JSON payload via standard input, or to use a dedicated HTTP client library that handles argument serialization securely (a sketch of the safer pattern follows this table). | LLM | SKILL.md:64, 94, 118, 142 |
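To make the remediation concrete, here is a minimal sketch of the unsafe and safer patterns. The endpoint URL, field names, and variables are hypothetical (the actual SKILL.md endpoints are not reproduced here), and the safer variant assumes `jq` is available to perform JSON escaping:

```bash
#!/usr/bin/env bash
# Hypothetical endpoint, for illustration only.
API_URL="https://api.example.com/message"

# UNSAFE (the flagged pattern): the host LLM pastes user text directly
# into the command it emits. For user input
#     foo' ; evil_command ; 'bar
# the emitted command becomes
#     curl -X POST "$API_URL" -d '{"message": "foo' ; evil_command ; 'bar"}'
# where the user's quote closes the -d argument and evil_command runs.

# SAFER: keep user data in variables (populated by the harness, never by
# pasting it into command text), let jq do the JSON escaping, and feed
# the payload to curl on stdin with -d @- so the shell never re-parses it.
user_message="foo' ; evil_command ; 'bar"   # hostile input, now inert
user_address="0x1234abcd"

jq -n --arg message "$user_message" --arg address "$user_address" \
   '{message: $message, address: $address}' |
  curl -s -X POST "$API_URL" \
    -H 'Content-Type: application/json' \
    -d @-
```

The same effect can be achieved with a dedicated HTTP client library, which serializes arguments without involving a shell at all.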