Trust Assessment
refund-radar received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via User-Controlled Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via User-Controlled Arguments | LLM | SKILL.md:39 |

The skill describes several CLI commands intended to be executed by the host LLM based on user input. Arguments such as `--csv` (filename), `--out` (filename), and `--merchant` (merchant name) are directly controlled by the user. If the host LLM constructs these shell commands by directly interpolating unsanitized user input, an attacker could inject arbitrary shell commands by supplying malicious strings (e.g., `my_statement.csv; rm -rf /` or `EvilCorp"; rm -rf /; echo "`), leading to arbitrary code execution on the host system. The host LLM must rigorously sanitize all user-provided strings (filenames, merchant names, month strings) before constructing and executing shell commands, including proper escaping of shell metacharacters. Alternatively, the skill could expose a Python API for direct function calls instead of relying solely on shell command execution.
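The recommended mitigation can be sketched in Python. Quoting each user-supplied value with `shlex.quote` (or, better still, passing an argument list to `subprocess.run` with the default `shell=False`) keeps shell metacharacters literal. The `refund_radar.py` entry point below is a hypothetical name for illustration; only the flag names come from the finding.

```python
import shlex

def build_command(csv_path: str, out_path: str, merchant: str) -> str:
    # shlex.quote wraps each user-supplied value in single quotes,
    # so metacharacters (';', '"', '$', ...) lose their meaning.
    # "refund_radar.py" is an assumed entry point, not from the skill.
    return (
        "python refund_radar.py"
        f" --csv {shlex.quote(csv_path)}"
        f" --out {shlex.quote(out_path)}"
        f" --merchant {shlex.quote(merchant)}"
    )

# The malicious merchant string from the finding is neutralized into
# a single quoted token instead of terminating the command:
cmd = build_command("statement.csv", "report.csv", 'EvilCorp"; rm -rf /; echo "')
# Parsing the result the way a shell would shows the payload stayed
# one literal argument:
tokens = shlex.split(cmd)
```

A stronger design avoids the shell entirely: `subprocess.run(["python", "refund_radar.py", "--csv", csv_path, ...])` passes each element verbatim to the child process, so no escaping step can be forgotten.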
Scan History