Security Audit
lawvable/awesome-legal-skills:skills/politique-lanceur-alerte-malik-taiar
github.com/lawvable/awesome-legal-skillsTrust Assessment
lawvable/awesome-legal-skills:skills/politique-lanceur-alerte-malik-taiar received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via untrusted PDF content, Prompt Injection via untrusted Markdown reference files.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 26, 2026 (commit 4d82d4cf). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Prompt Injection via untrusted PDF content The skill explicitly instructs the LLM to 'read IN FULL' the file `assets/Decret_2022_1284.pdf`. Since the entire skill package context, including this instruction, is treated as untrusted input, any malicious instructions embedded within the `Decret_2022_1284.pdf` file could be interpreted and executed by the host LLM. This creates a direct vector for prompt injection, allowing an attacker to manipulate the LLM's behavior by crafting the content of the PDF. Avoid instructing the LLM to directly parse or 'read in full' untrusted binary files (like PDFs) for instructions or critical content. If content from such files is necessary, it should be pre-processed, sanitized, and only specific, trusted text extracted and provided to the LLM. Alternatively, the PDF content should be explicitly marked as data, not instructions. | LLM | SKILL.md:150 | |
| HIGH | Prompt Injection via untrusted Markdown reference files The skill instructs the LLM to 'Assess the system systematically using the references', which include several Markdown files (`DECRET_PROCEDURE.md`, `RGPD_CNIL.md`, `FONCTION_PUBLIQUE.md`, `VIGILANCE.md`, `TEXTES_LEGAUX.md`). As the entire skill package context is untrusted, any malicious instructions embedded within these Markdown files could be interpreted and executed by the host LLM. This provides a clear vector for prompt injection, allowing an attacker to manipulate the LLM's behavior by crafting the content of these reference files. Treat all referenced files as untrusted input. Implement strict parsing or content filtering if these files are meant to contain instructions. Clearly delineate trusted instructions from data, and ensure that content from untrusted files is processed only as data, not as executable commands or instructions for the LLM. | LLM | SKILL.md:159 |
Scan History
Embed Code
[](https://skillshield.io/report/8abcfc682d9b89e5)
Powered by SkillShield