Security Audit
Sounder25/Google-Antigravity-Skills-Library:20_failure_postmortem
github.com/Sounder25/Google-Antigravity-Skills-Library

Trust Assessment
Sounder25/Google-Antigravity-Skills-Library:20_failure_postmortem received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding: 0 critical, 1 high, 0 medium, and 0 low severity. The high-severity finding is a Markdown injection leading to prompt injection in a log file.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 28, 2026 (commit 09376edc). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Layer | Location |
|---|---|---|
| HIGH | LLM | scripts/log_failure.ps1:47 |

Markdown Injection Leading to Prompt Injection in Log File

The `scripts/log_failure.ps1` script interpolates the user-provided `$Command` and `$Error` parameters directly into a Markdown log file (`POSTMORTEMS.md`) without sanitization or escaping.

- The `$Command` parameter is wrapped in inline-code backticks (`` `$Command` ``). An attacker can close these backticks and inject arbitrary Markdown (e.g., ``command` # Malicious Heading``).
- The `$Error` parameter is placed in a Markdown blockquote (`> $Error`) but is not otherwise escaped. Markdown inside a blockquote is still interpreted, so an attacker can inject arbitrary syntax (e.g., `> Error message\n# New Instruction`).

This lets an attacker alter the structure of the log file or embed instructions in it. Because the skill directs the LLM to read the log afterwards, as indicated by its description and output message ("You MUST now analyze the root cause."), injected Markdown can introduce new instructions and manipulate the model's subsequent reasoning, i.e., prompt injection. Remediation: contain both `$Command` and `$Error` in a multi-line fenced code block, using a fence longer than any backtick run in the content, so they are treated as literal text; alternatively, escape all Markdown special characters in the strings.
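The recommended containment can be sketched as follows. This is a Python stand-in for the PowerShell fix, not the skill's actual code; the `fence` and `log_entry` names are illustrative. The key detail is choosing a fence longer than any backtick run inside the untrusted content, so the input cannot close the fence early:

```python
import re


def fence(text: str) -> str:
    """Wrap untrusted text in a Markdown code fence it cannot escape."""
    # Find the longest run of backticks inside the content, then use a
    # fence at least one backtick longer (minimum three, per CommonMark).
    longest = max((len(m) for m in re.findall(r"`+", text)), default=0)
    ticks = "`" * max(3, longest + 1)
    return f"{ticks}\n{text}\n{ticks}"


def log_entry(command: str, error: str) -> str:
    # Both untrusted values are contained as literal text: an injected
    # line like "# New Instruction" renders inside the fence instead of
    # becoming a real heading the LLM might treat as an instruction.
    return (
        "## Failure\n\n"
        f"Command:\n{fence(command)}\n\n"
        f"Error:\n{fence(error)}\n"
    )


entry = log_entry("build ``` run", "error\n# New Instruction")
```

With this containment, a payload containing ` ``` ` is wrapped in a four-backtick fence, so the breakout attempt described in the finding no longer terminates the code block.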
Full report: https://skillshield.io/report/659724f0a670bcb7
Powered by SkillShield