Security Audit
incident-responder
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
incident-responder received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include "Untrusted content attempts to instruct LLM to open a file."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
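The HIGH finding below hinges on spotting imperative file-access commands inside untrusted skill content. As a minimal illustration only (these patterns are hypothetical and are not SkillShield's actual detection rules), such a check might look like:

```python
import re

# Hypothetical patterns for direct file-access instructions embedded in
# untrusted content -- the kind of phrasing behind this report's HIGH finding.
INJECTION_PATTERNS = [
    re.compile(r"\bopen\s+`[^`]+`", re.IGNORECASE),
    re.compile(r"\b(read|load|execute)\s+the\s+file\b", re.IGNORECASE),
]

def flag_direct_commands(untrusted_text: str) -> list[str]:
    """Return substrings of untrusted_text that look like direct commands to the LLM."""
    hits = []
    for pattern in INJECTION_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(untrusted_text))
    return hits
```

A real behavioral-safety layer would go well beyond keyword matching, but the sketch shows why a phrase like ``open `resources/implementation-playbook.md``` inside a skill prompt is mechanically detectable.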
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Untrusted content attempts to instruct LLM to open a file.** The skill's instructions, located within the untrusted input block, contain a direct command for the LLM to 'open `resources/implementation-playbook.md`'. This constitutes a prompt injection attempt, as it tries to make the LLM perform an action (file access) based on untrusted input. If the LLM has file system access capabilities, this pattern could be exploited to read or potentially execute arbitrary files, depending on the LLM's environment and permissions. While the target file is within the skill's own package, the direct instruction from untrusted content is a security risk. *Remediation:* remove direct commands to the LLM from untrusted content. If file access is intended, it should be mediated through a trusted tool call with strict validation and sandboxing, not a direct instruction within the skill's prompt. | LLM | SKILL.md:18 |
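The remediation above calls for mediating file access through a trusted tool call with strict validation and sandboxing. A minimal sketch of such a mediation layer, assuming a hypothetical skill root directory and helper name:

```python
from pathlib import Path

# Assumed install location of the skill package (hypothetical path).
SKILL_ROOT = Path("/opt/skills/incident-responder").resolve()

def read_skill_resource(relative_path: str) -> str:
    """Resolve the requested path and refuse anything that escapes SKILL_ROOT."""
    candidate = (SKILL_ROOT / relative_path).resolve()
    # Path.is_relative_to requires Python 3.9+.
    if not candidate.is_relative_to(SKILL_ROOT):
        raise PermissionError(f"path escapes skill sandbox: {relative_path}")
    return candidate.read_text()
```

The key design choice is that the LLM never opens files directly: it can only request a relative path, and the trusted tool resolves and validates it before reading, rejecting traversal attempts like `../../etc/passwd`.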
Scan History
Embed Code
[View full report](https://skillshield.io/report/f6c0bd9483756ddb)
Powered by SkillShield