Security Audit
tdd-workflows-tdd-cycle
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
tdd-workflows-tdd-cycle received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Untrusted input directly embedded in subagent prompts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted input directly embedded in subagent prompts The skill directly embeds the user-provided `$ARGUMENTS` into prompts for various subagents (e.g., `comprehensive-review::architect-review`, `unit-testing::test-automator`, `backend-development::backend-architect`). This allows a malicious user to inject arbitrary instructions into the subagent's prompt, potentially overriding its intended behavior, extracting sensitive information (Data Exfiltration), or causing it to generate harmful content or execute unintended commands (Command Injection) if the subagent has such capabilities. Implement robust input sanitization or use a templating mechanism that strictly separates user input from instructions. Consider using a structured input format (e.g., JSON schema) for `$ARGUMENTS` and validating it before embedding. If direct embedding is necessary, ensure the subagent is sandboxed and has minimal permissions, and consider human-in-the-loop verification for critical operations. | LLM | SKILL.md:45 |
Scan History
Embed Code
[](https://skillshield.io/report/ca4feccc380490a4)
Powered by SkillShield