Security Audit
Sounder25/Google-Antigravity-Skills-Library:11_llmstxt_doc_parsing
github.com/Sounder25/Google-Antigravity-Skills-Library

Trust Assessment
Sounder25/Google-Antigravity-Skills-Library:11_llmstxt_doc_parsing received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include Untrusted External Content Fed Directly to LLM, Arbitrary File Write via Unsanitized Output Directory, and Potential Command Injection via Unsanitized URL Parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, indicating areas for improvement.
Last analyzed on February 28, 2026 (commit 09376edc). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted External Content Fed Directly to LLM.** The skill fetches documentation from an external, untrusted URL and explicitly states in its implementation that it 'Prompts the agent to read this single file' (`CONSOLIDATED_KNOWLEDGE.md`), which is generated from this external content. An attacker controlling the `llms.txt` file or any linked markdown files can inject arbitrary instructions or malicious prompts into the LLM's context, leading to prompt injection and potential compromise of the agent's behavior. Implement strict sanitization and validation of all fetched content before it is presented to the LLM. Consider running the LLM in a sandboxed environment with limited capabilities when processing untrusted external content to mitigate the impact of prompt injection. | LLM | SKILL.md:64 |
| CRITICAL | **Arbitrary File Write via Unsanitized Output Directory.** The `--output-dir` parameter allows users to specify where ingested documentation should be saved. If this parameter is not properly sanitized within the `fetch_docs.ps1` script, an attacker could use path traversal sequences (e.g., `../../`) or absolute paths to write files to arbitrary locations on the host system. This could lead to system compromise, data corruption, or the execution of malicious files. Implement robust validation and sanitization for the `--output-dir` parameter in `fetch_docs.ps1` to prevent path traversal and restrict file writes to a designated, sandboxed directory. Ensure that only relative paths within the allowed output directory are accepted. | LLM | SKILL.md:20 |
| HIGH | **Potential Command Injection via Unsanitized URL Parameter.** The `--url` parameter is passed to the `fetch_docs.ps1` script, which then uses it to fetch content. If the URL is not properly validated and escaped before being used in PowerShell commands (e.g., `Invoke-WebRequest`), an attacker could inject arbitrary commands into the PowerShell script's execution context, leading to remote code execution. This risk also extends to URLs parsed from the `llms.txt` file, which are subsequently fetched. Ensure all URL inputs (both the primary `--url` and any URLs parsed from `llms.txt`) are strictly validated against a whitelist of allowed schemes and characters, and properly escaped before being used in any shell commands within `fetch_docs.ps1`. Avoid direct concatenation of user input into shell commands. | LLM | SKILL.md:19 |
| MEDIUM | **Inherent Supply Chain Risk from Untrusted External Content Ingestion.** The skill's core functionality involves ingesting documentation from arbitrary external URLs. This introduces an inherent supply chain risk, as the agent relies on the integrity and trustworthiness of third-party documentation sites. A compromised or malicious documentation site could serve harmful content, leading to prompt injection, data exfiltration (if the LLM is tricked into revealing information), or other attacks, even with input sanitization. This is a design-level risk. Implement robust content filtering, reputation checks for URLs, and clear user warnings about the source of ingested information. Consider sandboxing the LLM's capabilities when processing content from untrusted sources to limit potential damage. | LLM | SKILL.md:6 |
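The remediations for the file-write and URL findings above can be sketched as input validators. This is an illustrative Python sketch, not the skill's actual PowerShell (`fetch_docs.ps1`); the function names, the HTTPS-only scheme whitelist, and the sandbox-root convention are assumptions for demonstration.

```python
# Hypothetical validators for the two injectable inputs flagged in the findings:
# the --url parameter and the --output-dir parameter.
from pathlib import Path
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}  # assumption: documentation sources must be HTTPS


def validate_url(url: str) -> str:
    """Reject URLs whose scheme is not whitelisted or that lack a host.

    Applies to both the primary --url and any URLs parsed from llms.txt.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.netloc:
        raise ValueError(f"Disallowed URL: {url!r}")
    return url


def resolve_output_dir(user_dir: str, sandbox_root: Path) -> Path:
    """Resolve --output-dir inside a sandbox root, rejecting path traversal.

    Joining then resolving collapses `../` sequences, so an escape attempt
    like `../../etc` resolves outside the sandbox and is rejected.
    """
    candidate = (sandbox_root / user_dir).resolve()
    if not candidate.is_relative_to(sandbox_root.resolve()):
        raise ValueError(f"Path escapes sandbox: {user_dir!r}")
    return candidate
```

Resolving the joined path before comparison (rather than string-matching for `..`) also catches traversal hidden behind symlinks or redundant separators. Note this addresses only path traversal; the prompt-injection findings require content sanitization and sandboxing, which no input validator alone can provide.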
Full report: https://skillshield.io/report/788eb554689251e6
Powered by SkillShield