Security Audit
incident-runbook-templates
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
incident-runbook-templates received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 3 critical, 1 high, 0 medium, and 0 low severity. Key findings include Potential Kubernetes Command Injection via `kubectl`, Potential Database Command Injection via `psql`, and Potential Arbitrary Local Script Execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Kubernetes Command Injection via `kubectl`** — The skill contains numerous `kubectl` commands intended for managing a Kubernetes cluster, including `kubectl rollout undo`, `kubectl scale`, `kubectl set env`, and `kubectl apply`. If an LLM were to interpret and execute these commands, it could lead to unauthorized modification of production infrastructure, denial of service, or the deployment of malicious configurations (especially via `kubectl apply -f - <<EOF`). This represents a severe command-injection vulnerability. *Mitigation:* explicitly state that these commands are illustrative examples for human operators and not intended for direct execution by an automated agent; implement strict sandboxing and whitelisting for any code-execution environment used by the LLM, preventing direct execution of arbitrary `kubectl` commands. | LLM | SKILL.md:91 |
| CRITICAL | **Potential Database Command Injection via `psql`** — The skill includes `psql` commands designed for direct database interaction, such as querying active connections, terminating backend processes (`pg_terminate_backend`), and performing maintenance operations (`VACUUM FULL`). Execution of these commands by an LLM could result in data exfiltration, denial of service, or data corruption. The use of environment variables like `$DB_HOST` and `$DB_USER` further highlights the potential for credential exposure if the LLM's environment is not secure. *Mitigation:* clearly mark database commands as examples for human use; ensure the LLM's execution environment has no direct access to production databases or sensitive credentials; implement a secure, mediated interface for any database interactions the LLM requires. | LLM | SKILL.md:105 |
| CRITICAL | **Potential Arbitrary Local Script Execution** — The skill suggests executing local shell scripts such as `./scripts/smoke-test-payments.sh` and `./scripts/db-rollback.sh`. If an LLM were to execute these, it could lead to arbitrary code execution on the host system, potentially compromising the environment, exfiltrating data, or performing unauthorized actions. The content of these scripts is unknown, posing an unquantifiable risk. *Mitigation:* remove direct references to executing local scripts, or warn clearly that these are placeholders for human-initiated actions; the LLM's execution environment must strictly disallow arbitrary script execution from untrusted skill content. | LLM | SKILL.md:173 |
| HIGH | **Potential Data Exfiltration via Network Requests and Log Access** — The skill includes `curl` commands that make requests to various internal and external endpoints (e.g., Prometheus, Sentry, Stripe, internal APIs) and `kubectl logs` commands that access application logs. If an LLM were to execute these, it could inadvertently send sensitive data to external services, expose internal application logs containing PII or secrets, or retrieve confidential information from internal systems. *Mitigation:* implement strict network egress filtering and whitelisting for any LLM-initiated requests; ensure LLMs have no direct access to sensitive logs or internal APIs without sanitization and authorization layers; clearly delineate these commands as examples for human operators. | LLM | SKILL.md:66 |
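The "sandboxing and whitelisting" mitigation recommended for the `kubectl` and `psql` findings can be sketched as a simple command gate in the agent runtime. This is a minimal illustration, not part of the skill or of SkillShield; the allowlist contents are assumptions chosen to permit only read-only `kubectl` verbs:

```python
import shlex

# Hypothetical allowlist: (program, subcommand) pairs an agent may run
# unmediated. Mutating commands (kubectl apply/rollout, psql, local
# scripts) are deliberately absent and would require human approval.
ALLOWED = {
    ("kubectl", "get"),
    ("kubectl", "describe"),
}

def is_allowed(command: str) -> bool:
    """Permit a command only if its first two tokens match the allowlist."""
    tokens = shlex.split(command)
    if len(tokens) < 2:
        return False  # bare program names and scripts are rejected outright
    return (tokens[0], tokens[1]) in ALLOWED

assert is_allowed("kubectl get pods -n payments")
assert not is_allowed("kubectl rollout undo deployment/payments")
assert not is_allowed("./scripts/db-rollback.sh")
```

A real deployment would also need to reject shell metacharacters and flag-smuggling (e.g. `kubectl get --raw ...`), which this two-token check does not attempt.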
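Likewise, the "network egress filtering and whitelisting" mitigation for the data-exfiltration finding amounts to checking every outbound URL against a host allowlist before the request is made. A minimal sketch, with hypothetical host names (not taken from the skill):

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist of hosts the agent may contact.
ALLOWED_HOSTS = {"prometheus.monitoring.svc", "sentry.example.com"}

def egress_permitted(url: str) -> bool:
    """Permit an outbound request only when the URL's host is allowlisted."""
    host = urlparse(url).hostname
    return host is not None and host in ALLOWED_HOSTS

assert egress_permitted("https://sentry.example.com/api/0/issues/")
assert not egress_permitted("https://attacker.example.net/exfil")
```

In practice this check belongs at the network layer (proxy or firewall rules), not only in application code, so the agent cannot bypass it.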
Full report: [skillshield.io/report/65886886f90d41ba](https://skillshield.io/report/65886886f90d41ba)
Powered by SkillShield