Security Audit
hig-components-content
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
hig-components-content received a trust score of 65/100, placing it in the Caution category: the skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are Prompt Injection via Behavioral Directives (critical) and Potential Data Exfiltration via Local File Access Instruction (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
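The static-analysis layer described above can be pictured as a directive scan over the skill's markdown. The sketch below uses hypothetical patterns chosen to match the directives quoted in the findings; SkillShield's actual detection rules are not public.

```python
import re

# Hypothetical patterns flagging imperative directives aimed at the host LLM.
# These are illustrative only, not SkillShield's real rule set.
DIRECTIVE_PATTERNS = [
    re.compile(r"(?i)^\s*(check for|always|never|you must|ignore previous)"),
    re.compile(r"(?i)output format"),
]

def flag_directives(markdown: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a directive pattern."""
    hits = []
    for n, line in enumerate(markdown.splitlines(), start=1):
        if any(p.search(line) for p in DIRECTIVE_PATTERNS):
            hits.append((n, line.strip()))
    return hits
```

Running this over a SKILL.md containing "Check for .claude/apple-design-context.md before asking questions." would flag that line, which is the shape of the CRITICAL finding below.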
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via Behavioral Directives | LLM | SKILL.md:5 |
| HIGH | Potential Data Exfiltration via Local File Access Instruction | LLM | SKILL.md:5 |

CRITICAL: Prompt Injection via Behavioral Directives (LLM, SKILL.md:5)
The skill package contains explicit instructions within the untrusted content that attempt to manipulate the host LLM's behavior and output structure. Directives such as "Check for .claude/apple-design-context.md before asking questions. Use existing context and only ask for information not already covered." and the detailed "Output Format" and "Questions to Ask" sections attempt to override the LLM's default operational guidelines and dictate its interaction style and response format. This is a common prompt injection technique.
Remediation: Remove all direct instructions to the LLM from the untrusted skill content. The LLM should infer behavior from the skill's manifest and trusted system prompts, not from untrusted markdown. If specific output formats or contextual checks are required, define them in the skill's trusted configuration or system prompts.

HIGH: Potential Data Exfiltration via Local File Access Instruction (LLM, SKILL.md:5)
The untrusted skill content explicitly instructs the LLM to "Check for `.claude/apple-design-context.md`". This directive implies that the LLM has file system access and is expected to read local files based on untrusted input. If the LLM's underlying capabilities allow it to read arbitrary files, this instruction could be manipulated by a malicious user to read sensitive files from the host system, leading to data exfiltration. It also indicates excessive permissions if the LLM has broad file system access.
Remediation: Do not let the LLM execute file system operations based on untrusted input. If file access is necessary for skill functionality, it should be strictly controlled, sandboxed, and limited to predefined, non-sensitive paths. The instruction to "check for" a file should be removed from untrusted content and handled by the trusted skill runtime if applicable.
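The sandboxing remediation for the HIGH finding can be sketched as a path-allowlist check in the trusted skill runtime. This is a minimal sketch, assuming a Python runtime and a hypothetical sandbox root; the helper name and path are illustrative.

```python
from pathlib import Path

# Hypothetical sandbox root for skill data; a real runtime would configure this.
ALLOWED_ROOT = Path("/srv/skill-data")

def safe_read(requested: str) -> str:
    """Resolve the requested path and refuse anything outside the sandbox root.

    Resolving first collapses `..` segments and symlinks, so traversal
    attempts like "../etc/passwd" are rejected before any file is opened.
    """
    resolved = (ALLOWED_ROOT / requested).resolve()
    if not resolved.is_relative_to(ALLOWED_ROOT.resolve()):
        raise PermissionError(f"path escapes sandbox: {requested}")
    return resolved.read_text()
```

With this gate in the trusted runtime, an untrusted "check for this file" directive can at worst read a file the runtime has already deemed non-sensitive.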
Scan History
Full report: https://skillshield.io/report/64baf9111dfd85a2