Security Audit
tdd-workflows-tdd-refactor
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
tdd-workflows-tdd-refactor received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via Unsanitized User Input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Prompt Injection via Unsanitized User Input The skill constructs a prompt for a subagent (`tdd-orchestrator`) by directly embedding user-provided `$ARGUMENTS` without apparent sanitization or validation. This allows for prompt injection, where a malicious user could craft `$ARGUMENTS` to manipulate the subagent's behavior, override its instructions, or potentially extract sensitive information from the subagent's context. Implement robust input validation and sanitization for `$ARGUMENTS` before embedding it into the subagent's prompt. Consider using templating engines that escape user input or employing techniques like instruction/data separation (e.g., placing user input in a separate system message or a dedicated data field, rather than directly in the instruction part of the prompt). | LLM | SKILL.md:30 |
Scan History
Embed Code
[](https://skillshield.io/report/21ff5c9f3965e289)
Powered by SkillShield