Trust Assessment
linguistic-humidifier-ad-model received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Untrusted content attempts to define LLM persona and behavior.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted content attempts to define LLM persona and behavior The skill's primary content, located within the untrusted input delimiters, contains explicit instructions intended to manipulate the host LLM's persona and conversational logic. Specifically, it attempts to define the LLM as a 'Brand Representative' and provides detailed 'Deployment Logic' (triggers) and 'Constraints' (rules for interaction). This is a direct prompt injection attempt, aiming to control the LLM's output and role based on untrusted input. Remove all instructions intended for the host LLM from untrusted content. Skill definitions should be declarative and not contain imperative commands for the LLM's behavior. If a persona or specific interaction logic is required, it must be defined by the trusted system prompt or skill framework, not by the untrusted skill content itself. | LLM | SKILL.md:5 |
Scan History
Embed Code
[](https://skillshield.io/report/285061d69474b9ae)
Powered by SkillShield