Security Audit
python-performance-optimization
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
python-performance-optimization received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Untrusted instructions attempting to manipulate LLM behavior.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted instructions attempting to manipulate LLM behavior The `SKILL.md` file, which is entirely marked as untrusted input, contains direct instructions for the host LLM under the 'Instructions' section. These instructions attempt to guide the LLM's response generation and actions (e.g., 'Clarify goals', 'Apply relevant best practices', 'open `resources/implementation-playbook.md`'), which constitutes a prompt injection attempt from untrusted content. This violates the principle that content within untrusted delimiters should not be treated as instructions. Move the 'Instructions' section and any other direct commands for the LLM out of the untrusted input block. LLM instructions should only be provided in trusted parts of the skill definition or system prompt. If the content must remain untrusted, rephrase it as descriptive text rather than direct commands. | LLM | skills/python-performance-optimization/SKILL.md:27 |
Scan History
Embed Code
[](https://skillshield.io/report/33b5b85c7c786eb4)
Powered by SkillShield