Trust Assessment
fluxA-x402-payment received a trust score of 67/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 0 high, 1 medium, and 0 low severity. Key findings include shell command injection via the `X-PAYMENT` header and LLM-controlled input in the API description field.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Shell Command Injection via X-Payment Header.** The `curl` command constructs its arguments by interpolating the `$PAYMENT_MANDATE` variable directly into a double-quoted string for the `X-PAYMENT` header. If the LLM-controlled `$PAYMENT_MANDATE` contains double quotes followed by shell metacharacters (e.g., `"; rm -rf /"`), it can break out of the header value and execute arbitrary shell commands on the host system. *Remediation:* use an HTTP client library (e.g., in Node.js or Python) that avoids shell interpolation entirely. If shell execution is unavoidable, strictly validate or escape `$PAYMENT_MANDATE` (e.g., with `printf %q` in bash) before use. | LLM | SKILL.md:28 |
| MEDIUM | **LLM-controlled input for API description field.** The `mandate-create` command expects the LLM to fill in `"{what task}"` for the `--desc` argument. The input is JSON-stringified before being sent to an API, but if the API's backend or any downstream system processes the description with another LLM, or displays it to a human without sanitization, a prompt injected by the agent could manipulate that downstream LLM or deceive the user. *Remediation:* validate and sanitize the `desc` parameter, limit its length and character set, and mark it as untrusted user-generated content for downstream systems. | LLM | SKILL.md:9 |
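The critical finding can be illustrated with a minimal Python sketch (the endpoint URL and the malicious payload are hypothetical, invented for illustration). It shows why direct interpolation is dangerous and two safer alternatives: passing `curl` its arguments as a list so no shell ever parses the value, or quoting the variable with `shlex.quote`, the Python analogue of bash's `printf %q`.

```python
import shlex

# Hypothetical malicious value an LLM might place in $PAYMENT_MANDATE.
malicious = 'abc"; rm -rf /tmp/victim; echo "'

# Unsafe pattern (the shape flagged at SKILL.md:28): the embedded double quote
# closes the header value, and the rest runs as shell commands.
unsafe = f'curl -s -H "X-PAYMENT: {malicious}" https://api.example.com/pay'

# Safer: build an argv list and run it without a shell (e.g., subprocess.run(argv)).
# Each list element becomes one literal argument; metacharacters are inert.
argv = ["curl", "-s", "-H", f"X-PAYMENT: {malicious}",
        "https://api.example.com/pay"]

# If a shell string is unavoidable, quote the variable first so the shell
# treats the entire header as a single literal word.
safe = ("curl -s -H " + shlex.quote(f"X-PAYMENT: {malicious}")
        + " https://api.example.com/pay")
```

Re-parsing `safe` with `shlex.split` confirms the whole header survives as one argument, with the injected `"; rm -rf ...` carried along as inert text.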
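For the medium finding, a sketch of the suggested `desc` validation. The `validate_desc` helper, the length limit, and the allowlist are all hypothetical choices for illustration; a real deployment should tune them to what the API actually accepts.

```python
import re

MAX_DESC_LEN = 200  # illustrative limit, not from the skill itself

# Allowlist: letters, digits, spaces, and basic punctuation. Rejects control
# characters and the brace/bracket characters common in prompt-injection
# payloads aimed at downstream LLMs.
DESC_RE = re.compile(r"^[A-Za-z0-9 .,;:'()\-]{1,%d}$" % MAX_DESC_LEN)

def validate_desc(desc: str) -> str:
    """Return desc unchanged if it passes the allowlist; raise otherwise."""
    if not DESC_RE.fullmatch(desc):
        raise ValueError("desc rejected: disallowed characters or too long")
    return desc
```

An allowlist is preferable to a denylist here: rather than enumerating dangerous sequences, it admits only characters known to be safe for display and downstream processing.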