Security Audit
Sounder25/Google-Antigravity-Skills-Library:10_async_feedback
github.com/Sounder25/Google-Antigravity-Skills-LibraryTrust Assessment
Sounder25/Google-Antigravity-Skills-Library:10_async_feedback received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Arbitrary file read via `--file` parameter, Potential for prompt injection via feedback file.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 28, 2026 (commit 09376edc). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Arbitrary file read via `--file` parameter The skill defines an input parameter `--file` which allows specifying an arbitrary file path. If the underlying script (`check_feedback.ps1`, which is not provided) reads the content of this user-controlled file and makes it accessible to the agent (e.g., by printing to console or passing to the LLM), it creates a direct path for data exfiltration. An attacker could use this to read sensitive files from the agent's environment (e.g., `/etc/passwd`, configuration files, private keys, or other user data). Restrict the `--file` parameter to a specific, isolated directory (e.g., a `feedback/` subdirectory) or implement strict path validation to prevent traversal and access to sensitive system locations. Alternatively, remove the ability to specify an arbitrary file path and hardcode `FEEDBACK.md` in a known safe, agent-specific location. | LLM | SKILL.md:20 | |
| MEDIUM | Potential for prompt injection via feedback file The skill's core function is to read "new instructions" from a user-controlled `FEEDBACK.md` file and allow the agent to "pivot immediately." If the content of this feedback file is directly incorporated into the LLM's prompt or context without proper sanitization, validation, or clear separation from system instructions, it can be exploited for prompt injection. An attacker could insert malicious instructions into `FEEDBACK.md` to manipulate the agent's behavior, override its goals, or extract information. Implement robust sanitization and validation of feedback content before passing it to the LLM. Clearly delineate user feedback from system instructions in the prompt using strong delimiters. Consider using a separate, sandboxed LLM call for processing feedback or employing techniques like instruction-following models that are less susceptible to prompt injection. | LLM | SKILL.md:7 |
Scan History
Embed Code
[](https://skillshield.io/report/8fb84496204a52da)
Powered by SkillShield