Trust Assessment
gourmet-spicy-food-lafeitu received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include: Untrusted Skill Dictates LLM Behavior (critical), Direct Storage and Transmission of User Credentials (high), and Suspicious import: requests (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted Skill Dictates LLM Behavior.** The `SKILL.md` file, which is treated as untrusted input, contains explicit instructions and directives for the host LLM. Examples include 'To provide the most accurate and efficient experience, follow this priority sequence:' and 'If you have browser tools (like `open_browser_url`), you **MUST** immediately open the registration page for the user using that URL.' This attempts to manipulate the LLM's operational logic and tool usage, which is a direct prompt injection. Skill documentation should describe the skill's capabilities and expected inputs/outputs, not issue commands or instructions to the host LLM. Remove all imperative statements directed at the LLM. | LLM | SKILL.md:20 |
| HIGH | **Direct Storage and Transmission of User Credentials.** The `lafeitu_client.py` script stores user account and password directly in a local file (`~/.lafeitu_creds.json`) and transmits them in custom HTTP headers (`x-user-account`, `x-user-password`) with every API request. While file permissions are set to `0o600`, storing credentials in this manner on the filesystem is vulnerable to local compromise. Transmitting them in custom headers rather than using standard, more secure authentication mechanisms (like OAuth tokens or standard Authorization headers) increases the risk of exposure, especially in an AI agent environment where the LLM is instructed to handle these credentials directly. Implement a more secure authentication mechanism. Instead of storing raw credentials, consider using short-lived API tokens, OAuth flows, or integrating with a secure credential management system. If direct credentials are unavoidable, ensure they are encrypted at rest and transmitted over HTTPS using standard `Authorization` headers with appropriate token types. Avoid instructing the LLM to handle raw passwords directly. | LLM | scripts/lib/commerce_client.py:69 |
| MEDIUM | **Suspicious import: requests.** Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/nowloady/agentic-spicy-food/scripts/lib/commerce_client.py:1 |
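The HIGH finding recommends replacing raw credentials in custom `x-user-*` headers with short-lived tokens sent via the standard `Authorization` header, stored with owner-only permissions. A minimal sketch of that pattern, assuming a token-based flow; the function names and cache path here are illustrative, not part of the skill's actual code:

```python
import json
import os
import time

def build_auth_headers(token: str) -> dict:
    """Send a short-lived token in the standard Authorization header
    instead of raw credentials in custom x-user-* headers."""
    return {"Authorization": f"Bearer {token}"}

def cache_token(path: str, token: str, expires_in: int) -> None:
    """Cache the token at rest with owner-only permissions (0o600),
    alongside its expiry so callers can refresh it before it lapses."""
    payload = {"token": token, "expires_at": time.time() + expires_in}
    # Open with restrictive mode atomically rather than chmod-ing after write.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(payload, f)
```

Even with this pattern, the token should be obtained over HTTPS from an exchange endpoint, so the raw password never needs to be persisted or handled by the LLM.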
[Full report](https://skillshield.io/report/967654c712434cf3)
Powered by SkillShield