Security Audit
data-engineering-data-driven-feature
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
data-engineering-data-driven-feature received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Untrusted user input directly embedded into subagent prompts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted user input directly embedded into subagent prompts The skill's definition, which is entirely untrusted, specifies a series of 'Task tool' calls where the user-provided `$ARGUMENTS` placeholder is directly concatenated into the `Prompt` for various subagents. This pattern is repeated across multiple phases of the workflow. This direct embedding allows a malicious user to inject arbitrary instructions into the subagent's prompt, potentially leading to unauthorized actions, data exfiltration, or manipulation of the subagent's intended behavior. The subagents are described with broad capabilities (e.g., 'data-scientist', 'backend-architect', 'data-engineer'), increasing the potential impact of such an injection. Implement robust input sanitization and validation for `$ARGUMENTS` before it is embedded into any subagent prompt. Consider using templating engines with strict auto-escaping or passing user input as structured data (e.g., JSON objects) rather than directly concatenating it into natural language prompts. Ensure the host LLM or subagents are sandboxed and have minimal necessary permissions to mitigate the impact of any successful injection. | LLM | SKILL.md:36 |
Scan History
Embed Code
[](https://skillshield.io/report/b73fd43d5781d4b1)
Powered by SkillShield