Trust Assessment
aluvia received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified one finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is a potential command injection via `aluvia-sdk` arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via `aluvia-sdk` arguments | LLM | SKILL.md:15 |

The skill is declared with `Bash(aluvia-sdk:*)` permissions, allowing it to execute the `aluvia-sdk` command with arbitrary arguments via a shell, and `SKILL.md` prominently features `aluvia-sdk open <url>` as a core command. If the skill constructs the `<url>` argument (or any other argument) from untrusted user input without proper sanitization or shell escaping, a malicious user could inject shell metacharacters (e.g., `;`, `&&`, `|`, `` ` ``) to execute arbitrary commands on the host system. The vulnerability exists because the `Bash` permission allows direct shell execution of `aluvia-sdk` and its arguments.

Recommendation: when constructing `aluvia-sdk` commands from user-provided input (especially URLs or other string arguments), strictly validate and shell-escape all input before it reaches the command. Ideally, use a command execution mechanism that passes arguments as a list rather than as a single shell string, so the shell never interprets the input.
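The mitigation described above can be sketched in Python. This is a minimal illustration, not code from the skill itself: the wrapper functions (`open_url_unsafe`, `open_url_safe`, `shell_command`) and the http(s)-only validation policy are assumptions for the example; only the `aluvia-sdk open <url>` command comes from the report.

```python
import shlex
import subprocess
from urllib.parse import urlparse

def open_url_unsafe(url: str) -> None:
    # VULNERABLE: the URL is interpolated into a shell string, so input
    # like "https://example.com; rm -rf ~" runs a second command.
    subprocess.run(f"aluvia-sdk open {url}", shell=True, check=True)

def open_url_safe(url: str) -> None:
    # Validate the input is a plain http(s) URL before using it.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"refusing non-http(s) URL: {url!r}")
    # Pass arguments as a list: no shell is involved, so any
    # metacharacters in `url` reach aluvia-sdk as literal text.
    subprocess.run(["aluvia-sdk", "open", url], check=True)

def shell_command(url: str) -> str:
    # If a single shell string is unavoidable, quote each argument.
    return f"aluvia-sdk open {shlex.quote(url)}"
```

The list form (`["aluvia-sdk", "open", url]`) is the safer default because it bypasses the shell entirely; `shlex.quote` is a fallback for contexts that require a shell string.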
[View the full report on SkillShield](https://skillshield.io/report/267cf601b54f2b53)