Security Audit
python-packaging
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
python-packaging received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding (1 critical, 0 high, 0 medium, 0 low severity). The key finding is "Untrusted skill attempts to manipulate host LLM instructions."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
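The report does not document how a numeric score maps to a trust category. As a minimal sketch, assuming simple threshold banding (the cutoffs below are illustrative assumptions, not SkillShield's published scheme), the mapping could look like:

```python
def trust_category(score: int) -> str:
    """Map a 0-100 trust score to a category band.

    NOTE: the thresholds here are hypothetical, chosen only so that a
    score of 72 lands in "Caution" as this report states.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be in 0..100")
    if score >= 85:
        return "Trusted"
    if score >= 60:
        return "Caution"
    return "High Risk"

print(trust_category(72))  # "Caution" under these assumed thresholds
```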
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted skill attempts to manipulate host LLM instructions.** The `SKILL.md` file, which is explicitly marked as untrusted input, contains direct instructions intended for the host LLM. These include general behavioral directives such as "Clarify goals" and "Apply relevant best practices", as well as a specific command to "open `resources/implementation-playbook.md`". This constitutes a prompt injection attempt, as untrusted content should never dictate the LLM's actions or internal processes. The host LLM must be hardened to strictly ignore any instructions or commands found within untrusted input delimiters. The skill author should revise `SKILL.md` to remove direct instructions to the LLM; any guidance should be provided outside the untrusted content, or framed as information for the user rather than commands for the AI. | LLM | skills/python-packaging/SKILL.md:23 |
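A heuristic for this class of finding can be sketched in a few lines. The pattern list below is a hypothetical illustration (not SkillShield's actual analyzer): it flags lines of untrusted skill content that read as imperative, LLM-directed commands, such as the `open resources/implementation-playbook.md` instruction cited above.

```python
import re

# Illustrative patterns for imperative, LLM-directed instructions.
# These are assumptions for the sketch, not a complete injection grammar.
INSTRUCTION_PATTERNS = [
    r"\byou (must|should|will)\b",
    r"\bignore (all|previous|prior)\b",
    r"^\s*(open|run|execute|read)\b",
]

def flag_injection_lines(untrusted_text: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that match an instruction pattern."""
    findings = []
    for lineno, line in enumerate(untrusted_text.splitlines(), start=1):
        if any(re.search(p, line, flags=re.IGNORECASE)
               for p in INSTRUCTION_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = "Clarify goals.\nOpen resources/implementation-playbook.md\n"
print(flag_injection_lines(sample))  # flags line 2 only
```

Real analyzers combine such lexical signals with LLM-based behavioral review, as the fourth layer of this scan does; a pattern list alone produces both false positives and misses.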
Embed Code
[View full report](https://skillshield.io/report/061d3d4b8621ef2b)
Powered by SkillShield