Security Audit
ailabs-393/ai-labs-claude-skills:packages/skills/nutritional-specialist
github.com/ailabs-393/ai-labs-claude-skillsTrust Assessment
ailabs-393/ai-labs-claude-skills:packages/skills/nutritional-specialist received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include File read + network send exfiltration, Sensitive path access: AI agent config, Prompt Injection via Stored User Preferences.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | File read + network send exfiltration AI agent config/credential file access Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | packages/skills/nutritional-specialist/SKILL.md:258 | |
| HIGH | Sensitive path access: AI agent config Access to AI agent config path detected: '~/.claude/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | packages/skills/nutritional-specialist/SKILL.md:258 | |
| MEDIUM | Prompt Injection via Stored User Preferences The skill collects comprehensive user preferences (e.g., dietary goals, dislikes, health conditions) and stores them persistently in `~/.claude/nutritional_preferences.json`. These stored preferences are subsequently loaded and used by the LLM to personalize future responses. If a malicious user injects prompt injection commands (e.g., 'ignore previous instructions', 'act as a different persona') into their preferences during the initial setup or an update, these commands will be stored and re-ingested by the LLM. This could lead to manipulation of the LLM's behavior in subsequent interactions, causing it to deviate from its intended function or reveal sensitive information. Implement robust input sanitization and validation for all user-provided preferences before they are stored. Filter or escape known prompt injection keywords and patterns. When the LLM loads preferences, explicitly instruct it to treat the data as factual information for personalization, not as executable instructions or commands. Consider using a more structured or enumerated data format for preferences where possible to reduce the surface area for textual instruction injection. | LLM | SKILL.md:49 |
Scan History
Embed Code
[](https://skillshield.io/report/196dba786c71d6ce)
Powered by SkillShield