Security Audit
incident-response-incident-response
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
incident-response-incident-response received a trust score of 81/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Potential Prompt Injection via user-controlled arguments in sub-agent prompts, Direct instruction to host LLM for local file access.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection via user-controlled arguments in sub-agent prompts The skill constructs prompts for various sub-agents by interpolating the `$ARGUMENTS` variable. If `$ARGUMENTS` contains user-controlled input without proper sanitization, a malicious user could inject instructions into the sub-agent's prompt. This could cause the sub-agent to deviate from its intended task, reveal sensitive information from connected systems (e.g., monitoring, logs, security tools), or perform unauthorized actions through its assigned tools and permissions. Implement robust input sanitization and validation for `$ARGUMENTS` before it is interpolated into sub-agent prompts. Consider using structured inputs or dedicated parameters instead of free-form text for sensitive operations. Ensure sub-agents operate with the principle of least privilege, limiting their capabilities to only what is strictly necessary for their defined tasks. | LLM | SKILL.md:47 | |
| MEDIUM | Direct instruction to host LLM for local file access The skill contains a direct instruction to the host LLM, `open `resources/implementation-playbook.md``, within the untrusted content. This attempts to manipulate the host LLM into performing a file system operation. If the host LLM is configured to execute such commands from skill descriptions, it could lead to unauthorized file access, potential data exfiltration if the file contains sensitive information, or unexpected behavior. This also implies an excessive permission request for file system access. Remove direct instructions to the host LLM from skill descriptions. If file access is genuinely required, it should be mediated through a secure, explicit tool call with proper access controls and user consent, rather than an implicit instruction. Ensure the host LLM does not execute arbitrary file system commands from untrusted skill content. | LLM | SKILL.md:30 |
Scan History
Embed Code
[](https://skillshield.io/report/ca5a100bb64eda34)
Powered by SkillShield