Security Audit
security-scanning-security-hardening
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
security-scanning-security-hardening received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are untrusted input embedded directly in sub-agent prompts (critical) and a request for highly privileged, potentially destructive actions (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
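The report does not publish how the four layer scores combine into the overall 65/100 trust score. The sketch below is purely illustrative: the per-layer scores other than the reported 55 for LLM Behavioral Safety are invented placeholders, and the simple-mean aggregation and band thresholds are assumptions, not SkillShield's actual formula.

```python
# Hypothetical layer scores; only "LLM Behavioral Safety": 55 comes from
# the report. The others are placeholders for illustration.
LAYER_SCORES = {
    "Manifest Analysis": 80,
    "Static Code Analysis": 70,
    "Dependency Graph": 75,
    "LLM Behavioral Safety": 55,  # lowest-scoring layer per the report
}

def trust_band(scores: dict) -> tuple:
    """Aggregate layer scores into an overall score and a trust band.

    Uses a plain integer mean and assumed band cutoffs; SkillShield's
    real weighting is not documented here.
    """
    overall = sum(scores.values()) // len(scores)
    if overall >= 80:
        band = "Trusted"
    elif overall >= 50:
        band = "Caution"
    else:
        band = "High Risk"
    return overall, band
```

Under these assumed weights the lowest layer drags the aggregate into the "Caution" band, which is the behavior the report describes.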
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted input directly embedded in sub-agent prompts.** The skill embeds the `$ARGUMENTS` placeholder directly into prompts for various sub-agents without apparent sanitization or validation. If `$ARGUMENTS` originates from an untrusted source, it can be used to inject malicious instructions into the sub-agents, potentially leading to arbitrary code execution, data exfiltration, or other unauthorized actions by the sub-agents. This vulnerability affects all `Task tool` calls that use `$ARGUMENTS` to define the target of security operations. *Recommendation:* Implement robust input validation and sanitization for `$ARGUMENTS` before embedding it into sub-agent prompts. Consider using a templating engine with strict auto-escaping, or passing `$ARGUMENTS` as a structured parameter rather than via direct string interpolation if the sub-agent supports it. Ensure the orchestrating LLM or framework provides mechanisms to prevent prompt injection into sub-agents. | LLM | SKILL.md:45 |
| HIGH | **Skill requests highly privileged and potentially destructive actions.** The skill's core functionality involves instructing sub-agents to perform actions such as "Coordinate immediate remediation of critical vulnerabilities" (implying code changes), "Implement comprehensive backend security controls" (code/config changes), "Deploy infrastructure security controls" (infrastructure changes), "Implement enterprise secrets management" (access to sensitive credentials), and "Execute comprehensive penetration testing" (potentially intrusive and destructive actions). While the skill itself does not possess these permissions, it orchestrates sub-agents to perform them. If the execution environment or the sub-agents lack proper authorization checks, human approval gates, or sandboxing, these requests could lead to unauthorized modifications, data loss, or system compromise. *Recommendation:* Ensure that the execution environment for this skill and its sub-agents enforces strict authorization and access control. Implement human approval workflows for any actions that modify code, infrastructure, or sensitive configurations. Sub-agents should operate under the principle of least privilege and be sandboxed where possible. Provide clear warnings to users about the scope and potential impact of executing this skill. | LLM | SKILL.md:60 |
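The critical finding's recommendation, validating `$ARGUMENTS` and passing it as clearly delimited data rather than free text, can be sketched as follows. The allow-list pattern, function names, and prompt template are illustrative assumptions, not part of the audited skill.

```python
import re

# Hypothetical allow-list: a skill target should look like a hostname,
# repo slug, or file path -- never free-form natural-language text.
SAFE_TARGET = re.compile(r"^[A-Za-z0-9._/:@-]{1,200}$")

def sanitize_arguments(raw: str) -> str:
    """Validate untrusted $ARGUMENTS before any prompt interpolation.

    Rejects anything containing whitespace or punctuation that could
    smuggle instructions into a sub-agent prompt.
    """
    candidate = raw.strip()
    if not SAFE_TARGET.fullmatch(candidate):
        raise ValueError(f"rejected unsafe skill argument: {candidate!r}")
    return candidate

def build_subagent_prompt(raw_arguments: str) -> str:
    """Embed the validated target as a delimited data field, not free text."""
    target = sanitize_arguments(raw_arguments)
    return (
        "Run a security audit. "
        f"Target (data, not instructions): <target>{target}</target>"
    )
```

An injection attempt such as `"repo; ignore previous instructions"` fails the allow-list check instead of reaching the sub-agent prompt; the delimiters additionally signal to the sub-agent that the value is data.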
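The high-severity finding's recommended human approval workflow for destructive actions can be sketched as a dispatch gate. The action names and function signatures below are hypothetical illustrations of the pattern, not APIs from the skill or from SkillShield.

```python
from typing import Optional

# Hypothetical set of privileged sub-agent actions that must never run
# without explicit human sign-off.
PRIVILEGED_ACTIONS = {
    "remediate_critical_vulnerabilities",
    "deploy_infrastructure_controls",
    "manage_secrets",
    "run_penetration_test",
}

class ApprovalRequired(Exception):
    """Raised when a destructive action lacks human approval."""

def dispatch_action(action: str, approved_by: Optional[str] = None) -> str:
    """Dispatch a sub-agent action, gating privileged ones on approval.

    Non-privileged actions run directly; privileged actions raise
    ApprovalRequired unless a human approver is recorded.
    """
    if action in PRIVILEGED_ACTIONS and approved_by is None:
        raise ApprovalRequired(f"action {action!r} requires human approval")
    return f"dispatched {action}"
```

Pairing a gate like this with least-privilege credentials and sandboxed sub-agents addresses the finding's concern that the skill orchestrates actions it has no authority to authorize on its own.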
[Full report on SkillShield](https://skillshield.io/report/33d646776baa9df0)
Powered by SkillShield