Security Audit
seo-snippet-hunter
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
seo-snippet-hunter received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 2 critical, 0 high, 0 medium, and 0 low severity. Key findings include "Untrusted content attempts to define LLM persona and behavior" and "Untrusted content instructs LLM to open a file".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
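SkillShield's exact weighting across the four layers is not published in this report; only the overall score (65/100) and the lowest layer score (LLM Behavioral Safety, 40/100) are stated. As a purely illustrative sketch, an equal-weight mean over per-layer scores could yield such an overall score. The three non-LLM layer values below are placeholders invented for the example, and the category thresholds are assumptions, not SkillShield's actual rules.

```python
# Hypothetical aggregation sketch. Only the 40/100 LLM Behavioral Safety
# score comes from the report; the other three layer scores and the
# category thresholds are illustrative placeholders.
LAYER_SCORES = {
    "Manifest Analysis": 80,        # placeholder
    "Static Code Analysis": 75,     # placeholder
    "Dependency Graph": 65,         # placeholder
    "LLM Behavioral Safety": 40,    # from the report (lowest layer)
}

def aggregate_trust(scores: dict[str, int]) -> int:
    """Equal-weight mean of layer scores, rounded to an integer."""
    return round(sum(scores.values()) / len(scores))

def category(score: int) -> str:
    """Map a 0-100 score to a coarse category (thresholds assumed)."""
    if score >= 80:
        return "Trusted"
    if score >= 50:
        return "Caution"
    return "High Risk"
```

With these placeholder inputs the mean works out to 65, which lands in the assumed "Caution" band, matching the report's headline score only by construction of the example.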
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted content attempts to define LLM persona and behavior.** The skill's `SKILL.md` file, which is treated as untrusted input, contains direct instructions intended to manipulate the host LLM's persona and operational guidelines, including assigning a role ("You are a featured snippet optimization specialist...") and dictating its focus and formatting behavior ("Focus on clear, direct answers. Format content to maximize featured snippet eligibility."). Such instructions from untrusted sources can lead to prompt injection, where the LLM's behavior is subverted. Remediation: remove all direct instructions to the LLM from the untrusted `SKILL.md` content; if the skill requires a specific persona or behavioral guidelines, define them in trusted configuration files or the skill's trusted code, not in user-editable or untrusted markdown. | LLM | SKILL.md:20 |
| CRITICAL | **Untrusted content instructs LLM to open a file.** The skill's `SKILL.md` file, treated as untrusted input, contains an instruction for the LLM to "open `resources/implementation-playbook.md`" under certain conditions. This is a direct attempt to inject a command or tool-use instruction into the LLM's execution flow from an untrusted source. If the LLM has file system access, this could be exploited to read arbitrary files or trigger unintended actions, potentially leading to data exfiltration or further command injection. Remediation: remove all direct instructions for tool usage or file access from the untrusted `SKILL.md` content; tool calls should be explicitly defined and controlled within the trusted skill code or framework, not triggered by untrusted markdown instructions. | LLM | SKILL.md:17 |
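Both findings concern imperative instructions embedded in untrusted skill markdown. A minimal pre-filter along these lines could flag such content before it reaches the model; the pattern names and regexes below are assumptions for demonstration, mirroring the two findings above, and are not SkillShield's actual detection rules.

```python
import re

# Illustrative injection markers for untrusted skill markdown.
# Pattern names and regexes are assumptions, not SkillShield's rules:
# - persona_definition: role assignment like "You are a ... specialist"
# - file_access: instructions to open a backtick-quoted file path
INJECTION_PATTERNS = {
    "persona_definition": re.compile(r"\byou are an?\b", re.IGNORECASE),
    "file_access": re.compile(r"\bopen\s+`[^`]+`", re.IGNORECASE),
}

def flag_untrusted_markdown(text: str) -> list[str]:
    """Return the names of all injection patterns matched in the text."""
    return [name for name, pattern in INJECTION_PATTERNS.items()
            if pattern.search(text)]
```

A real scanner would need far richer heuristics (or model-based classification) than two regexes, but the shape is the same: untrusted markdown is scanned and flagged before being interpolated into a prompt.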
[View full report](https://skillshield.io/report/f5e3def9936bc88c)
Powered by SkillShield