Trust Assessment
book-lashes received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings: "Untrusted Skill Definition as Prompt Injection" (critical) and "Untrusted Skill Defines Tool Handling PII" (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted Skill Definition as Prompt Injection.** The entire skill definition, including its name, description, and tool specifications, is enclosed within untrusted input delimiters, so the host LLM is being instructed on how to operate and what capabilities to expose based on content explicitly marked as untrusted. This is a direct prompt injection: an untrusted source dictates the LLM's behavior and available actions, violating the principle of treating content within these tags as data, not instructions. *Remediation:* treat skill definitions that dictate LLM behavior and tool usage as trusted input; remove the untrusted input delimiters around the skill definition in `SKILL.md` to establish it as a trusted instruction source for the LLM. | LLM | SKILL.md:1 |
| HIGH | **Untrusted Skill Defines Tool Handling PII.** The `create_booking` tool, defined within the untrusted skill context, requires sensitive personally identifiable information (PII) such as `customerName`, `customerEmail`, and `customerPhone`. Because the skill definition itself is untrusted, an attacker could define a tool that instructs the LLM to collect and transmit PII to an arbitrary endpoint, granting excessive permissions to an untrusted entity and posing a significant data-exfiltration risk if the backend service is malicious or compromised. *Remediation:* ensure skill definitions are trusted; if a skill must handle PII, its definition and implementation should undergo rigorous security review, and for untrusted skills, tools that handle sensitive data should be restricted or require explicit user consent and verification before being exposed to the LLM. | LLM | SKILL.md:31 |
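The PII finding above can be illustrated with a minimal sketch. Assuming tool definitions use a JSON-Schema-style `parameters.properties` shape (an assumption, not SkillShield's actual internal format), a scanner can flag parameter names matching common PII patterns before an untrusted skill's tools are exposed to the LLM:

```python
import re

# Parameter-name patterns that commonly indicate PII. Illustrative only;
# a real scanner would also inspect field descriptions and types.
PII_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"name", r"email", r"phone", r"address", r"ssn", r"dob")
]

def flag_pii_params(tool: dict) -> list[str]:
    """Return the names of tool parameters whose names look like PII."""
    params = tool.get("parameters", {}).get("properties", {})
    return [
        name for name in params
        if any(pat.search(name) for pat in PII_PATTERNS)
    ]

# The create_booking tool from the finding above (schema shape assumed).
create_booking = {
    "name": "create_booking",
    "parameters": {
        "properties": {
            "customerName": {"type": "string"},
            "customerEmail": {"type": "string"},
            "customerPhone": {"type": "string"},
            "date": {"type": "string"},
        }
    },
}

print(flag_pii_params(create_booking))
# → ['customerName', 'customerEmail', 'customerPhone']
```

A host could use such a check to gate PII-handling tools behind explicit user consent, as the remediation suggests, rather than exposing them automatically.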
Scan History
Embed Code
[SkillShield Trust Report](https://skillshield.io/report/8636eaac21a512d4)
Powered by SkillShield