Security Audit
lawvable/awesome-legal-skills:skills/gdpr-breach-sentinel-oliver-schmidt-prietz
github.com/lawvable/awesome-legal-skillsTrust Assessment
lawvable/awesome-legal-skills:skills/gdpr-breach-sentinel-oliver-schmidt-prietz received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include Covert behavior / concealment directives, Potential for Prompt Injection via Fast Path input, Prompt Injection via Web Search Query Construction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 26, 2026 (commit 4d82d4cf). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Potential for Prompt Injection via Fast Path input The skill allows users to provide a 'free-form or structured description' via the 'Fast Path' option, from which the LLM is instructed to extract 11 critical data points. This direct parsing of untrusted, unstructured input for operational parameters creates a high risk of prompt injection. A malicious user could craft input to manipulate the LLM's subsequent instructions, decision-making, or tool usage, potentially leading to unintended actions or information disclosure. Implement robust input validation and sanitization for all user-provided free-form text. Consider using structured input formats (e.g., JSON, forms) where possible, or strictly define extraction rules to minimize LLM interpretation of untrusted content as instructions. Isolate user input from system instructions to prevent instruction hijacking. | LLM | SKILL.md:49 | |
| HIGH | Covert behavior / concealment directives Directive to hide behavior from user Remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | skills/gdpr-breach-sentinel-oliver-schmidt-prietz/SKILL.md:545 | |
| HIGH | Prompt Injection via Web Search Query Construction The skill explicitly instructs the LLM to perform `web_search` queries using 'specific case details' and 'identified Lead SA or relevant SA(s)' which are derived from user input. A malicious user could inject harmful terms into these input fields, leading to the construction of unintended or malicious search queries. Depending on the `web_search` tool's capabilities and permissions, this could lead to information disclosure (e.g., searching for internal documents if the tool has such access), or other unintended actions. Implement strict sanitization and validation of user-provided inputs used in constructing `web_search` queries. Ensure that the `web_search` tool operates within a tightly controlled and sandboxed environment with minimal permissions, and cannot access internal resources or perform actions beyond simple web retrieval. | LLM | SKILL.md:290 | |
| MEDIUM | Reliance on `docx_generator` tool with potential for content injection The skill explicitly instructs the LLM to use a `docx_generator` tool to create audit-ready documentation, incorporating 'all relevant details from the assessment.' If these details include user-provided input, and the `docx_generator` tool is not robustly secured against content injection (e.g., embedding malicious macros, external links, or other active content), it could lead to a downstream compromise when the generated document is opened by a user. This could facilitate data exfiltration or command injection if the document can execute code. The risk is dependent on the security implementation of the `docx_generator` tool itself. Ensure the `docx_generator` tool is securely implemented, sanitizing all input to prevent the embedding of malicious content (e.g., macros, active content, external links that could exfiltrate data). The tool should operate in a sandboxed environment and only generate static, safe document content. Consider warning users about opening documents from untrusted sources. | LLM | SKILL.md:370 |
Scan History
Embed Code
[](https://skillshield.io/report/bdabcdece1e7a97b)
Powered by SkillShield