Trust Assessment
The `breweries` skill received a trust score of 86/100, placing it in the Mostly Trusted category. It has passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via Unsanitized User Input in Skill Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings

| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via Unsanitized User Input in Skill Arguments.** `SKILL.md` (untrusted content) instructs the LLM to construct shell commands from user-provided arguments (e.g., `breweries search "name"`, `breweries city "city name"`). If the LLM interpolates untrusted user input into these arguments without proper shell escaping, an attacker can inject arbitrary shell commands: supplying `"; rm -rf /` as a city name would turn the constructed `breweries city "…"` invocation into one that executes `rm -rf /`. Although the examples show quoted arguments, the instructions never explicitly mandate robust escaping of all user-supplied strings. The skill should explicitly instruct the LLM to shell-escape or sanitize every user-provided value (e.g., with Python's `shlex.quote()` or an equivalent mechanism) before incorporating it into a shell command, and the skill documentation should state this requirement. | LLM | SKILL.md:52 |
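The mitigation the finding recommends can be sketched as follows. This is a minimal illustration, not the skill's actual code: the `build_breweries_command` helper is hypothetical, and only the `breweries <subcommand> <argument>` CLI shape comes from the finding above.

```python
import shlex

def build_breweries_command(subcommand: str, user_input: str) -> str:
    """Build a shell-safe `breweries` CLI invocation (illustrative helper)."""
    # shlex.quote() wraps the value in single quotes and escapes any embedded
    # quotes, so shell metacharacters (;, |, $, etc.) are passed literally
    # instead of being interpreted by the shell.
    return f"breweries {subcommand} {shlex.quote(user_input)}"

# A benign argument passes through unchanged:
print(build_breweries_command("city", "denver"))
# → breweries city denver

# The malicious "city" from the finding is neutralized rather than executed:
print(build_breweries_command("city", '"; rm -rf /'))
# → breweries city '"; rm -rf /'
```

Because the quoted string reaches the `breweries` binary as a single literal argument, the injected `rm -rf /` is never interpreted as a separate command.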