Trust Assessment
service-booking received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Untrusted instructions attempt to manipulate LLM behavior, Untrusted instruction to collect Personally Identifiable Information (PII).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted instructions attempt to manipulate LLM behavior The 'Rules' section within the untrusted skill content contains explicit instructions intended to guide the host LLM's behavior, such as 'Never book without confirmation', 'Show pricing upfront', 'Collect required info', and 'Default to user's ZIP'. Since this content is marked as untrusted, these are direct attempts to inject instructions into the LLM's operational logic, potentially leading to unauthorized actions or data handling if the LLM follows them. Move all behavioral instructions for the LLM to a trusted part of the skill definition (e.g., the manifest or a trusted system prompt). Untrusted content should never dictate LLM behavior. | LLM | SKILL.md:105 | |
| HIGH | Untrusted instruction to collect Personally Identifiable Information (PII) The 'Rules' section, located within the untrusted input, explicitly instructs the LLM to 'Collect required info — Name, email, phone before booking'. This is a prompt injection attempt that, if successful, would cause the LLM to solicit and collect sensitive Personally Identifiable Information (PII) from the user based on untrusted directives. While the `create_booking` tool legitimately requires this PII, the instruction to collect it should originate from a trusted source. Ensure that instructions for collecting PII are part of the trusted skill definition or system prompt, not derived from untrusted skill content. The LLM should be explicitly instructed on PII handling and consent from a trusted source. | LLM | SKILL.md:108 |
Scan History
Embed Code
[](https://skillshield.io/report/c18c74a6e3afb36b)
Powered by SkillShield