Security Audit
Jamkris/everything-gemini-code:skills/django-security
github.com/Jamkris/everything-gemini-code

Trust Assessment
Jamkris/everything-gemini-code:skills/django-security received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings: SQL Injection Vulnerability Example and Cross-Site Scripting (XSS) Vulnerability Example.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 30, 2026 (commit 6c6f43aa). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **SQL Injection Vulnerability Example** — The skill package contains an explicit example of a SQL injection vulnerability. While marked as 'VULNERABLE!', an AI agent might extract and use this code snippet if not carefully instructed, leading to severe database compromise. The example demonstrates direct string interpolation into a raw SQL query without proper sanitization. Consider removing explicit examples of vulnerable code from the skill, or encapsulate them in a way that makes it impossible for an AI agent to mistakenly use them (e.g., in a dedicated 'anti-patterns' section with strong warnings, or by altering the code to be syntactically incorrect but illustrative). If kept, ensure the warning is extremely prominent and clear. | LLM | SKILL.md:160 |
| HIGH | **Cross-Site Scripting (XSS) Vulnerability Example** — The skill package includes an explicit example of a Cross-Site Scripting (XSS) vulnerability. Although clearly marked as 'VULNERABLE!', an AI agent could potentially extract and utilize this code snippet, leading to XSS attacks if user-controlled input is marked as safe without prior escaping. This could manipulate client-side behavior or exfiltrate user data. Consider removing explicit examples of vulnerable code from the skill, or encapsulate them in a way that makes it impossible for an AI agent to mistakenly use them (e.g., in a dedicated 'anti-patterns' section with strong warnings, or by altering the code to be syntactically incorrect but illustrative). If kept, ensure the warning is extremely prominent and clear. | LLM | SKILL.md:190 |
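Both findings concern anti-patterns (string interpolation into raw SQL, and marking unescaped user input as safe HTML) rather than the safe alternatives. A minimal sketch of the safe patterns is below, using the Python standard library's `sqlite3` and `html` modules to illustrate; in Django itself the equivalents are parameterized `%s` placeholders with `Model.objects.raw()` / `cursor.execute()`, and `django.utils.html.escape()` or `format_html()` instead of `mark_safe()` on raw input. The `user_input` and `payload` values are hypothetical examples.

```python
import sqlite3
import html

# Hypothetical attacker-controlled input.
user_input = "alice'; DROP TABLE users; --"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# UNSAFE (the anti-pattern flagged above): interpolating user input into SQL.
#   query = f"SELECT * FROM users WHERE name = '{user_input}'"

# SAFE: a parameterized query -- the driver treats the value as data,
# never as SQL. (sqlite3 uses ? placeholders; Django uses %s.)
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection payload matches no row and executes nothing

# SAFE: escape user input before it reaches HTML output, instead of
# marking it safe. (In Django: django.utils.html.escape / format_html.)
payload = "<script>alert(1)</script>"
print(html.escape(payload))  # &lt;script&gt;alert(1)&lt;/script&gt;
```

If a skill must show the vulnerable form, keeping it commented out next to the safe call, as above, is one way to make it harder for an agent to execute by mistake.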
[Full report](https://skillshield.io/report/ce794a35c8d89122)
Powered by SkillShield