Trust Assessment
The `recipes` skill received a trust score of 90/100, placing it in the Trusted category. It has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via User Input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via User Input: The skill's `recipes` command, specifically the `search` subcommand, accepts arbitrary user-provided strings as a query. If this input is not rigorously sanitized or escaped before being passed to underlying shell commands (e.g., `curl` or `jq` executed via `bash`), it could enable command injection: an attacker could craft a query containing shell metacharacters to execute arbitrary commands on the host system. The manifest explicitly lists `bash` as a required binary, confirming shell execution capability. Remediation: validate and sanitize all user-provided arguments, especially the `search` command's `query` parameter; ensure arguments passed to shell commands are properly quoted and escaped (e.g., using `printf %q` in bash); for parameters like `area` and `meal_id`, apply strict type checking or whitelist allowed values. | LLM | SKILL.md:30 |
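The remediation in the finding can be sketched in bash. This is a hypothetical illustration, not the skill's actual code: the function names and the endpoint URL are assumptions. The idea is to percent-encode the raw query (here via `jq`'s `@uri` filter, since the manifest already requires `jq`) so metacharacters reach `curl` as inert data, and to whitelist structured parameters such as `meal_id`.

```shell
#!/usr/bin/env bash
# Hypothetical hardening sketch for a `search`-style subcommand.
# Endpoint URL and function names are illustrative assumptions.
set -euo pipefail

# Whitelist check: a meal_id must be purely numeric.
validate_meal_id() {
  [[ "$1" =~ ^[0-9]+$ ]]
}

# Percent-encode the raw query with jq's @uri filter so shell
# metacharacters (; | & $() etc.) become inert encoded data.
encode_query() {
  jq -rn --arg q "$1" '$q|@uri'
}

# Build the request without eval or string-spliced command lines;
# every expansion is double-quoted.
safe_search() {
  local query="$1" encoded
  encoded=$(encode_query "$query")
  curl -fsS "https://example.invalid/api/search?s=${encoded}"
}
```

Given a hostile input such as `fish; rm -rf /`, `encode_query` yields `fish%3B%20rm%20-rf%20%2F`, so the payload is never parsed by a shell; `validate_meal_id` rejects any `meal_id` that is not strictly numeric.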
[View full report](https://skillshield.io/report/d80cf70c335d673d)
Powered by SkillShield