Trust Assessment
django-expert received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 3 critical, 1 high, 0 medium, and 0 low severity. Key findings include Untrusted content attempts to define LLM's role, Untrusted content attempts to impose behavioral constraints on LLM, Untrusted content attempts to dictate LLM's output format.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 11, 2026 (commit 3d5e297b). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted content attempts to define LLM's role The 'Role Definition' section within the untrusted input attempts to instruct the host LLM on its persona and expertise ('You are a senior Python engineer...'). This directly violates the security instruction to treat content within untrusted tags as data, not commands, and constitutes a prompt injection attempt to manipulate the LLM's core behavior. Move role definition and behavioral instructions outside the untrusted input delimiters. The LLM framework should be configured to ignore or sanitize such instructions originating from untrusted sources. | LLM | SKILL.md:7 | |
| CRITICAL | Untrusted content attempts to impose behavioral constraints on LLM The 'Constraints' section, including 'MUST DO' and 'MUST NOT DO' directives, attempts to instruct the host LLM on its operational behavior (e.g., 'Use `select_related`/`prefetch_related`', 'Store secrets in settings.py'). This directly violates the security instruction to treat content within untrusted tags as data, not commands, and constitutes a prompt injection attempt to control the LLM's actions. Move behavioral constraints and directives outside the untrusted input delimiters. The LLM framework should be configured to ignore or sanitize such instructions originating from untrusted sources. | LLM | SKILL.md:35 | |
| CRITICAL | Untrusted content attempts to dictate LLM's output format The 'Output Templates' section attempts to instruct the host LLM on how to format its responses ('When implementing Django features, provide: 1. Model definitions...'). This directly violates the security instruction to treat content within untrusted tags as data, not commands, and constitutes a prompt injection attempt to control the LLM's output structure. Move output formatting instructions outside the untrusted input delimiters. The LLM framework should be configured to ignore or sanitize such instructions originating from untrusted sources. | LLM | SKILL.md:51 | |
| HIGH | Untrusted content instructs LLM to load internal resources The 'Reference Guide' section within the untrusted input attempts to instruct the host LLM to 'Load detailed guidance based on context' from specific internal markdown files (e.g., `references/models-orm.md`). While loading internal skill resources is often legitimate, the instruction to perform this action should not originate from untrusted content. This constitutes a prompt injection attempt to direct the LLM's internal operations. Move instructions for loading internal resources outside the untrusted input delimiters. The LLM framework should be configured to ignore or sanitize such instructions originating from untrusted sources, or handle resource loading through a trusted mechanism. | LLM | SKILL.md:27 |
Scan History
Embed Code
[](https://skillshield.io/report/c7573dc237557d52)
Powered by SkillShield