Security Audit
Sounder25/Google-Antigravity-Skills-Library:19_adversarial_reviewer
github.com/Sounder25/Google-Antigravity-Skills-LibraryTrust Assessment
Sounder25/Google-Antigravity-Skills-Library:19_adversarial_reviewer received a trust score of 57/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via User-Controlled File Content, Data Exfiltration via User-Controlled File Path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 28, 2026 (commit 09376edc). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Controlled File Content The skill reads the content of a user-specified file (`$FilePath`) and directly embeds it into a prompt that is then fed to an LLM. If a malicious user provides a file containing prompt injection instructions, these instructions will be passed to the LLM, potentially overriding its system instructions or causing it to perform unintended actions. The skill explicitly states that the agent 'must then "simulate" the adversary by responding to this prompt,' confirming the output is intended for an LLM. Implement robust sanitization or escaping of user-provided file content before embedding it into an LLM prompt. Consider using a dedicated 'code block' or 'data' instruction for the LLM to clearly delineate user-provided content from instructions. Alternatively, restrict the types or locations of files that can be reviewed to trusted sources only. | LLM | scripts/prepare_review.ps1:26 | |
| HIGH | Data Exfiltration via User-Controlled File Path The skill uses `Get-Content -Path $FilePath -Raw` to read the entire content of a file specified by the user. This content is then included in the output prompt. An attacker could provide a path to a sensitive file (e.g., configuration files, credential files, system files like `/etc/passwd`) and the skill would read and output its content. If the agent's subsequent response is observable by the attacker, this constitutes a direct data exfiltration vector. Restrict the `FilePath` parameter to only allow access to specific, non-sensitive directories or file types. Implement a whitelist of allowed file extensions or paths. Avoid reading arbitrary file content from user-controlled paths. If reading arbitrary files is necessary, ensure the output is not directly exposed to an untrusted party or logged in an insecure manner. | Static | scripts/prepare_review.ps1:13 |
Scan History
Embed Code
[](https://skillshield.io/report/c3db6f1039db39b8)
Powered by SkillShield