Security Audit
hybrid-search-implementation
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
hybrid-search-implementation received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is: untrusted instructions manipulate host LLM behavior.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Untrusted instructions manipulate host LLM behavior | LLM | skills/hybrid-search-implementation/SKILL.md:19 |

The skill's `SKILL.md` file contains direct instructions intended for the host LLM, such as 'Clarify goals...', 'Apply relevant best practices...', 'Provide actionable steps...', and, critically, 'open `resources/implementation-playbook.md`'. These directives are embedded within untrusted content delimiters, indicating an attempt to manipulate the LLM's execution flow and potentially access local resources based on untrusted input. The instruction to 'open' a file is a clear attempt to direct the LLM's actions.

Remediation: remove all direct instructions intended for the host LLM from the untrusted content. Skill descriptions should inform the user about the skill's purpose, not command the LLM. If the skill needs to interact with files, it should do so through explicitly defined and permissioned tools, not via direct LLM instructions embedded in untrusted markdown.
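To illustrate the class of check involved, here is a minimal Python sketch of a heuristic scanner for directives of this kind. This is a hypothetical example, not SkillShield's actual detector; the patterns below are assumptions derived from the phrases quoted in the finding (imperative verbs and `open` commands targeting local files).

```python
import re

# Hypothetical heuristic patterns (derived from the finding above, not from
# SkillShield's real ruleset): flag lines that issue imperative directives
# to the host LLM from untrusted skill markdown.
DIRECTIVE_PATTERNS = [
    re.compile(r"\bopen\s+`[^`]+`"),                    # e.g. "open `resources/...`"
    re.compile(r"^\s*(?:Clarify|Apply|Provide)\b"),     # imperative verbs aimed at the LLM
]

def flag_llm_directives(skill_md: str) -> list[str]:
    """Return lines of a SKILL.md body that look like direct LLM instructions."""
    flagged = []
    for line in skill_md.splitlines():
        if any(p.search(line) for p in DIRECTIVE_PATTERNS):
            flagged.append(line.strip())
    return flagged

sample = """Clarify goals with the user.
open `resources/implementation-playbook.md`
This skill helps with hybrid search."""
print(flag_llm_directives(sample))
# → ['Clarify goals with the user.', 'open `resources/implementation-playbook.md`']
```

A real behavioral-safety layer would go well beyond keyword matching, but even this sketch shows why the flagged line is suspicious: it is untrusted content phrased as a command for the model to act on local files.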
[View the full report](https://skillshield.io/report/689a4990ce66e8a2)
Powered by SkillShield