Trust Assessment
binance-pay received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding (0 critical, 1 high, 0 medium, 0 low severity): Potential Command Injection via Unsanitized String Construction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via Unsanitized String Construction.** The skill documentation demonstrates constructing shell command strings, particularly for `PAYLOAD` variables, using patterns that are vulnerable to command injection if placeholders or command substitutions are replaced by untrusted user input without proper shell escaping. Examples include `"<ORDER_ID>"`, `"<PREPAY_ID>"`, and the use of `"'""$(date +%s)""'"` (which could be replaced by `"'""$(UNTRUSTED_INPUT)""'"`). If an LLM generates executable code based on these patterns and inserts unescaped user input, it could lead to arbitrary command execution on the host system.<br><br>**Remediation:** When generating shell commands from user input, ensure all user-provided values are properly shell-escaped before being embedded into command strings. For JSON payloads, use a JSON library or a robust escaping mechanism (e.g., `jq -n --arg key "$USER_INPUT" '{$key}'`) to prevent both shell injection and JSON injection. Instruct the LLM to use safe methods for string construction when dealing with user input. | LLM | SKILL.md:30 |
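The recommended `jq` pattern can be sketched as follows. This is a minimal illustration, not taken from the skill itself; the `ORDER_ID` value and the `merchantTradeNo` field name are assumptions standing in for untrusted input and a payload key.

```shell
#!/bin/sh
# ORDER_ID stands in for untrusted user input (hypothetical value).
ORDER_ID='abc"; rm -rf / #'

# Unsafe pattern flagged by the finding: raw interpolation into a
# hand-built JSON string, which breaks quoting and enables injection.
# PAYLOAD="{\"merchantTradeNo\": \"$ORDER_ID\"}"

# Safer: let jq perform all JSON escaping of the untrusted value.
# --arg binds the raw string to a jq variable; '{$merchantTradeNo}'
# expands to an object with that key and the escaped value.
PAYLOAD=$(jq -cn --arg merchantTradeNo "$ORDER_ID" '{$merchantTradeNo}')
echo "$PAYLOAD"
```

Because the value is passed via `--arg` rather than interpolated into the command string, the embedded quote and `rm -rf` text arrive as inert data, and `jq` emits them as a properly escaped JSON string.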