Security Audit
hig-components-layout
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
hig-components-layout received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Untrusted instruction to LLM (Prompt Injection).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted instruction to LLM (Prompt Injection).** The skill contains an instruction within the untrusted input block that attempts to direct the host LLM's behavior. Specifically, it tells the LLM to "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered." This is a prompt injection attempt, as it tries to influence the LLM's operational logic from an untrusted source. Remediation: remove or rephrase instructions to the LLM from within untrusted content. If such instructions are necessary, they should be part of the trusted skill definition or prompt engineering, not embedded in user-provided or untrusted skill content. | LLM | SKILL.md:3 |
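The kind of check behind this finding can be sketched with a simple lexical heuristic. The patterns and function below are hypothetical illustrations, not SkillShield's actual detection logic: they flag lines in untrusted skill content that read like instructions aimed at the host LLM, such as the `SKILL.md:3` line quoted above.

```python
import re

# Hypothetical patterns for instruction-like text in untrusted content.
# Real scanners use richer (often model-assisted) analysis; this is a sketch.
INSTRUCTION_PATTERNS = [
    re.compile(r"\bignore (all )?(previous|prior) instructions\b", re.I),
    re.compile(r"\b(you must|you should|always|never)\s+\w+", re.I),
    re.compile(r"\bcheck for\b.*\bbefore asking\b", re.I),
    re.compile(r"\bdo not (ask|mention|reveal)\b", re.I),
]

def flag_injection_lines(untrusted_text: str) -> list[tuple[int, str]]:
    """Return (1-based line number, line) pairs matching any pattern."""
    hits = []
    for lineno, line in enumerate(untrusted_text.splitlines(), start=1):
        if any(p.search(line) for p in INSTRUCTION_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# Example: the flagged instruction sits on line 3 of this mock skill file.
skill_md = (
    "name: hig-components-layout\n"
    "description: Apple HIG layout guidance\n"
    "Check for `.claude/apple-design-context.md` before asking questions.\n"
)
print(flag_injection_lines(skill_md))
```

A match only indicates instruction-like phrasing; a human or behavioral-safety layer still has to judge whether the instruction is a legitimate part of the trusted skill definition or an injection attempt.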
Scan History
Embed Code
[](https://skillshield.io/report/91556cc3c54fbc8e)
Powered by SkillShield