Trust Assessment
diet-tracker received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 1 critical, 2 high, 2 medium, and 0 low severity. Key findings include Suspicious import: requests, Potential data exfiltration: file read + network send, Attempted Prompt Injection in Skill Description.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings6
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Attempted Prompt Injection in Skill Description The skill's `SKILL.md` file contains an instruction (`Remember to use `exec cat` command to confirm file type.`) that attempts to manipulate the host LLM. This is a direct violation of the security analyzer's instructions to treat content within the untrusted tags as data, not commands. Remove any instructions or commands intended for the host LLM from the skill's description or any other untrusted content. | LLM | SKILL.md:41 | |
| HIGH | Potential data exfiltration: file read + network send Function 'get_nutrition' reads files and sends data over the network. This may indicate data exfiltration. Review this function to ensure file contents are not being sent to external servers. | Static | skills/yonghaozhao722/diet-tracker/scripts/get_food_nutrition.py:63 | |
| HIGH | User Input Written Directly to LLM-Readable Memory File The `update_memory` function in `scripts/update_memory.py` writes user-provided `food_item` directly into a markdown file (`memory/YYYY-MM-DD.md`). If this memory file is subsequently read by an LLM, a malicious user could inject instructions or manipulate the LLM's behavior by crafting a `food_item` containing prompt injection payloads (e.g., markdown formatting, special characters, or direct instructions). Sanitize or escape user-provided input (`food_item`) before writing it to any file that might be processed by an LLM. Consider using a format that explicitly separates data from instructions, or implement strict input validation. | LLM | scripts/update_memory.py:10 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/yonghaozhao722/diet-tracker/scripts/get_food_nutrition.py:2 | |
| MEDIUM | Hardcoded Absolute Paths for File Access The script `scripts/get_food_nutrition.py` uses hardcoded absolute paths (`/home/zhaoyh/clawd/USER.md` and `/home/zhaoyh/clawd/skills/diet-tracker/references/food_database.json`) to access files. This makes the skill non-portable, brittle to deployment environment changes, and implies an assumption of specific filesystem access which could be considered excessive if the skill is not strictly sandboxed to its own directory. It also poses a supply chain risk as it ties the skill to a specific directory structure. Use relative paths or paths provided by the skill execution environment (e.g., through environment variables or a configuration service) instead of hardcoded absolute paths. Ensure the skill only accesses files within its designated sandbox. | LLM | scripts/get_food_nutrition.py:39 | |
| INFO | Hardcoded Placeholder API Key The `get_nutrition` function in `scripts/get_food_nutrition.py` uses a hardcoded `DEMO_KEY` for an external API call. While this is a placeholder, in a production scenario, hardcoding API keys directly in the code is a significant security risk, making them vulnerable to exposure. This indicates a lack of proper credential management. Replace hardcoded API keys with a secure method of credential management, such as environment variables, a secrets management service, or a configuration file that is not committed to version control. | LLM | scripts/get_food_nutrition.py:82 |
Scan History
Embed Code
[](https://skillshield.io/report/199bdd869203f0f6)
Powered by SkillShield