Security Audit
comprehensive-review-full-review
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
comprehensive-review-full-review received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include "Subagent Prompt Injection via Untrusted Skill Instructions" and "Untrusted Skill Orchestrates High-Privilege Subagents".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Subagent Prompt Injection via Untrusted Skill Instructions | LLM | SKILL.md:47 |
| HIGH | Untrusted Skill Orchestrates High-Privilege Subagents | LLM | SKILL.md:63 |

CRITICAL: Subagent Prompt Injection via Untrusted Skill Instructions (LLM, SKILL.md:47)
The untrusted skill defines templates for prompts that are passed to various subagents (e.g., `code-reviewer`, `security-auditor`). These templates include placeholders such as `$ARGUMENTS` and `{phase1_architecture_context}`. If the values for these placeholders are derived from untrusted user input or other unvalidated sources, an attacker could inject malicious instructions into the subagents, causing them to deviate from their intended function, perform unauthorized actions (e.g., exfiltrate sensitive data or bypass security checks), or generate misleading reports. Because the skill itself is untrusted and dictates the structure of these injectable prompts, this creates a second-order prompt injection vulnerability.
Remediation: Implement strict input validation and sanitization for all variables (`$ARGUMENTS`, `{..._context}`) before they are interpolated into subagent prompts. Consider parameterized prompts or a templating engine that strictly separates instructions from data. Ensure subagents operate under the principle of least privilege and have robust guardrails against unexpected instructions.

HIGH: Untrusted Skill Orchestrates High-Privilege Subagents (LLM, SKILL.md:63)
The untrusted skill instructs the host LLM to invoke subagents with potentially broad and sensitive capabilities, such as `security-auditor` (which performs secrets detection with GitLeaks) and `cicd-automation::deployment-engineer` (which reviews CI/CD pipelines and DevOps practices). By orchestrating these powerful tools, the untrusted skill gains indirect access to sensitive operations, codebases, and configurations. Combined with prompt injection (SS-LLM-001), an attacker could leverage these permissions to perform unauthorized security scans, access CI/CD secrets, manipulate deployment processes, or cause other significant harm.
Remediation: Review and restrict the capabilities of subagents, ensuring they operate with the principle of least privilege. Implement strict access controls and authorization checks before allowing an untrusted skill to invoke sensitive subagents. Isolate subagent execution environments and ensure the host LLM validates the intent and scope of subagent calls initiated by untrusted skills.
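The two remediations above can be sketched together: treat skill-supplied placeholder values as clearly delimited data before interpolating them into a subagent prompt, and gate subagent invocation behind a least-privilege allowlist. This is a minimal Python sketch under assumed conditions; `SUBAGENT_ALLOWLIST`, `sanitize`, `build_prompt`, and `invoke_subagent` are hypothetical names, not part of any SkillShield or host API.

```python
import re

# Hypothetical allowlist: subagents an untrusted skill may invoke,
# and the tools each one is permitted to use (least privilege).
SUBAGENT_ALLOWLIST = {
    "code-reviewer": {"read_file"},
    "security-auditor": {"read_file", "run_gitleaks"},
}

# Crude screen for common injection phrasing in skill-supplied values;
# a real guardrail would be far more thorough than this pattern.
CONTROL_PATTERN = re.compile(
    r"(?i)\b(ignore (?:all|previous) instructions|system prompt|you are now)\b"
)

def sanitize(value: str, max_len: int = 4000) -> str:
    """Reject obvious injection markers and cap the length of untrusted input."""
    if CONTROL_PATTERN.search(value):
        raise ValueError("possible prompt injection in skill-supplied value")
    return value[:max_len]

def build_prompt(template: str, **values: str) -> str:
    """Interpolate untrusted values as clearly delimited data, not instructions."""
    wrapped = {
        name: f"<untrusted-data name={name!r}>\n{sanitize(text)}\n</untrusted-data>"
        for name, text in values.items()
    }
    return template.format(**wrapped)

def invoke_subagent(name: str, prompt: str, tools: set[str]) -> None:
    """Refuse calls that exceed the allowlisted privileges for this subagent."""
    allowed = SUBAGENT_ALLOWLIST.get(name)
    if allowed is None:
        raise PermissionError(f"subagent {name!r} is not allowlisted for untrusted skills")
    extra = tools - allowed
    if extra:
        raise PermissionError(f"tools {sorted(extra)} exceed {name!r}'s privileges")
    # Hand off (prompt, tools) to the real subagent runtime here.
```

With this shape, a template like `"Review the following arguments:\n{ARGUMENTS}"` receives its value wrapped in `<untrusted-data>` delimiters, so the subagent can be instructed to treat that span strictly as data, and an attempt to invoke a non-allowlisted subagent such as `deployment-engineer` fails before any prompt is sent.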
[Full report](https://skillshield.io/report/2a08a34db32ef590)
Powered by SkillShield