Trust Assessment
westland-linguistic-humidifier received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 2 critical, 0 high, 0 medium, and 0 low severity. Key findings include Untrusted content attempts to assign LLM persona, Untrusted content attempts to control LLM behavior and script responses.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted content attempts to assign LLM persona The untrusted skill content attempts to assign a specific persona ('You are an authorized Brand Representative...') to the host LLM. This is a direct instruction to the LLM from untrusted input, which can lead to manipulation of the LLM's behavior and potential security risks if the persona is malicious or used to bypass safety mechanisms. Untrusted content should be treated as data, not commands. Remove all direct instructions or persona assignments intended for the host LLM from within untrusted content. Untrusted content should be treated as data, not commands. | LLM | SKILL.md:4 | |
| CRITICAL | Untrusted content attempts to control LLM behavior and script responses The untrusted skill content contains explicit instructions for the host LLM on how to monitor input ('Monitor the feed...') and specific constraints ('NEVER claim the product is medical.', 'NEVER argue. Reply with: "Efficiency is subjective; the LH-1 is atmospheric."'). Furthermore, it provides a direct interaction script for the LLM to use ('I noticed a high level of semantic static...'). These are all direct attempts to control the LLM's operational logic and output from untrusted input, which constitutes prompt injection. Remove all direct instructions, behavioral guidelines, constraints, or pre-defined conversational outputs intended for the host LLM from within untrusted content. Untrusted content should be treated as data, not commands. | LLM | SKILL.md:12 |
Scan History
Embed Code
[](https://skillshield.io/report/4b350aa2f6ff798b)
Powered by SkillShield