Trust Assessment
qlik-cloud received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding: 0 critical, 1 high, 0 medium, and 0 low severity. The sole finding is Potential Command Injection via User-Controlled Arguments to Shell Scripts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User-Controlled Arguments to Shell Scripts.** The skill's `SKILL.md` describes an interface where user-controlled input (e.g., search queries, app IDs, questions) is passed directly as arguments to `bash` scripts. If the underlying `.sh` scripts do not properly quote or sanitize these arguments, a malicious user could craft input that leads to arbitrary command execution on the host: for example, `"foo; rm -rf /"` could be executed, or `$(cat /etc/passwd)` could be injected. Remediation: the shell scripts must rigorously sanitize all user-provided arguments, typically by quoting expansions (e.g., `"$1"`) and avoiding `eval` with untrusted input. Consider a safer execution mechanism that does not expose the shell directly, or validate inputs against expected patterns (e.g., UUID format for app IDs) before passing them to shell commands. | LLM | SKILL.md:20 |
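The validate-then-quote remediation described in the finding can be sketched as a small bash wrapper. This is a minimal illustration, not code from the skill itself: the function names, the UUID pattern, and the demonstration values are assumptions.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Returns success only if the argument looks like a UUID (the expected
# shape of an app id), which rejects shell metacharacters outright.
is_uuid() {
  [[ "$1" =~ ^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$ ]]
}

# Hypothetical wrapper: validates the user-supplied app id before use.
run_query() {
  local app_id="$1"
  if ! is_uuid "$app_id"; then
    echo "error: app id must be a UUID" >&2
    return 1
  fi
  # The expansion is quoted and no eval is used, so even a hostile
  # argument is treated as inert data rather than executed as code.
  printf 'fetching app %s\n' "$app_id"
}

# Demonstration:
run_query "123e4567-e89b-12d3-a456-426614174000"
run_query 'foo; rm -rf /' || echo "rejected injection attempt"
```

Allowlist validation like this is stronger than trying to escape dangerous characters, because anything outside the expected pattern is refused before it ever reaches a command line.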
[View the full SkillShield report](https://skillshield.io/report/352031a57e56500d)