Security Audit
Sounder25/Google-Antigravity-Skills-Library:21_skill_gap_identifier
github.com/Sounder25/Google-Antigravity-Skills-LibraryTrust Assessment
Sounder25/Google-Antigravity-Skills-Library:21_skill_gap_identifier received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Stored Prompt Injection via Skill Template.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 28, 2026 (commit 09376edc). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Stored Prompt Injection via Skill Template User-provided inputs for `Name`, `Trigger`, and `Description` are directly embedded into the generated `SKILL.md` template without LLM-specific sanitization. If an LLM later processes this `SKILL.md` (e.g., to understand the newly created skill), an attacker could inject malicious instructions into these fields during skill creation, leading to prompt injection when the LLM reads the generated documentation. Implement LLM-specific sanitization or escaping for user-provided inputs (`Name`, `Trigger`, `Description`) before embedding them into the `SKILL.md` template. This could involve techniques like adding XML-style tags, specific escape sequences, or filtering keywords that LLMs might interpret as instructions. Alternatively, ensure that LLMs consuming these generated `SKILL.md` files are robust against prompt injection. | LLM | scripts/propose_skill.ps1:60 |
Scan History
Embed Code
[](https://skillshield.io/report/79dff4a05a4ad72c)
Powered by SkillShield