Trust Assessment
The `mcdonald` skill received a trust score of 65/100, placing it in the Caution category. Users should review the security considerations below before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, and 1 medium severity (0 low). Key findings: Explicit Instruction for Shell Execution (Prompt Injection Surface), Command Injection via MCD_MCP_URL, and Data Exfiltration via Malicious MCD_MCP_URL.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, reflecting the critical prompt-injection and command-injection findings below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Explicit Instruction for Shell Execution (Prompt Injection Surface).** The untrusted `SKILL.md` explicitly instructs the host LLM to "使用 exec 工具执行 curl 命令调用 MCP 服务" ("use the exec tool to run a curl command to call the MCP service"). This instruction is itself a prompt-injection surface: it directs the LLM to perform a high-privilege action (shell execution) based on untrusted input, opening the door to command injection and data exfiltration if the LLM is further manipulated into constructing malicious shell commands. *Remediation:* Do not instruct the LLM to execute shell commands from untrusted skill definitions. Instead, expose a structured tool interface with strictly typed, validated arguments, and let the host environment handle shell execution securely. | LLM | SKILL.md:27 |
| CRITICAL | **Command Injection via MCD_MCP_URL.** The skill instructs the LLM to execute `curl` commands via an `exec` tool, and the optional `MCD_MCP_URL` variable is interpolated directly into the command string without sanitization. An attacker who can influence `MCD_MCP_URL` (e.g., by manipulating the LLM into setting the variable) can inject arbitrary shell commands, yielding remote code execution on the host. *Remediation:* Validate `MCD_MCP_URL` as a URL and reject shell metacharacters; pass it to `curl` as a discrete argument rather than through an interpolated shell string, so injection is structurally impossible. | LLM | SKILL.md:29 |
| HIGH | **Data Exfiltration via Malicious MCD_MCP_URL.** Because `MCD_MCP_URL` is configurable, an attacker who can manipulate it (e.g., via prompt injection against the LLM) can redirect all API requests, including the `MCD_TOKEN` in the `Authorization` header and any request body, to an attacker-controlled server, exfiltrating credentials and data. *Remediation:* Restrict `MCD_MCP_URL` to an allowlist of trusted domains so it can only point at legitimate McDonald's API endpoints, and prevent the LLM from setting it from untrusted user input. | LLM | SKILL.md:29 |
| MEDIUM | **Potential Credential Exposure of MCD_TOKEN.** `MCD_TOKEN` is a sensitive API key used for authentication. The skill's instructions suggest the LLM may replace the placeholder `<YOUR_TOKEN>` with a user-provided value ("或在调用时替换 <YOUR_TOKEN>", i.e., "or replace <YOUR_TOKEN> at call time"). If the LLM is manipulated into accepting a token from untrusted input, or if the `curl` command containing the token (even one sourced from an environment variable) is logged or displayed insecurely, the token can be harvested. *Remediation:* Source `MCD_TOKEN` only from secure environment variables or a secret management system, never from user input, and prevent the LLM from logging or displaying the full `curl` command or its headers when credentials are present. | LLM | SKILL.md:22 |
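The remediations for the command-injection and exfiltration findings can be sketched host-side in Python. This is a minimal illustration, not the skill's code: `ALLOWED_HOSTS` and the endpoint hostname are hypothetical values, and a real deployment would load them from trusted configuration.

```python
import subprocess
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would pin the skill's
# legitimate API hosts here, from trusted configuration.
ALLOWED_HOSTS = {"mcp.mcdonalds.example.com"}

def validate_mcp_url(url: str) -> str:
    """Reject MCD_MCP_URL values that are not HTTPS URLs on allowlisted hosts."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"refusing non-HTTPS URL: {url!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"refusing untrusted host: {parsed.hostname!r}")
    return url

def call_mcp(url: str, token: str, body: str) -> subprocess.CompletedProcess:
    """Invoke curl with an argument vector (shell=False is the default),
    so shell metacharacters in the URL or body are never interpreted."""
    safe_url = validate_mcp_url(url)
    argv = [
        "curl", "--fail", "--silent",
        "-H", f"Authorization: Bearer {token}",
        "-H", "Content-Type: application/json",
        "-d", body,
        safe_url,
    ]
    return subprocess.run(argv, capture_output=True, text=True)
```

Passing `argv` as a list means a value like `https://evil.example/$(rm -rf ~)` is rejected by the allowlist check, and even an allowlisted URL is handed to `curl` as a single argument rather than interpolated into a shell string.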
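For the credential-exposure finding, one way to keep `MCD_TOKEN` out of logs is to redact the `Authorization` header before any command is displayed. A minimal sketch; the function name and regex are illustrative, not taken from the skill:

```python
import re

# Matches "Authorization: Bearer <token>" and keeps everything but the token.
_AUTH_RE = re.compile(r"(authorization:\s*bearer\s+)\S+", re.IGNORECASE)

def redact_argv(argv: list[str]) -> list[str]:
    """Return a copy of a curl argument vector with bearer tokens masked,
    safe to log or echo back to the user."""
    return [_AUTH_RE.sub(r"\g<1>[REDACTED]", arg) for arg in argv]
```

Any code path that prints or logs the assembled `curl` invocation would call `redact_argv` first, so the raw token never leaves the process.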
Full report: https://skillshield.io/report/8271fd47a2f10fee
Powered by SkillShield