Security Audit
backend-development-feature-development
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
backend-development-feature-development received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via Dynamic Prompt Construction for Subagents.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via Dynamic Prompt Construction for Subagents The skill constructs prompts for various subagents (e.g., 'business-analytics::business-analyst', 'security-scanning::security-auditor', 'backend-architect') using dynamic content derived from user input ('$ARGUMENTS') and outputs from previous subagent steps (e.g., '[include business analysis from step 1]'). If '$ARGUMENTS' or the output from an earlier subagent contains malicious instructions, these can be injected into the subsequent subagent's prompt. This could lead to the subagent deviating from its intended task, performing unauthorized actions (especially concerning high-privilege subagents like 'security-scanning::security-auditor'), or exfiltrating sensitive information. Implement robust input sanitization and validation for '$ARGUMENTS' and all intermediate outputs before they are used to construct prompts for subagents. Consider using structured outputs (e.g., JSON schema validation) for subagent responses to prevent arbitrary text from being re-injected. Employ prompt templating libraries that offer better separation of instructions and dynamic data. Implement guardrails or content moderation on subagent inputs/outputs. For high-privilege subagents like 'security-scanning::security-auditor', ensure strict sandboxing and least-privilege access controls are in place, and consider human-in-the-loop approval for sensitive operations. | LLM | SKILL.md:60 |
Scan History
Embed Code
[](https://skillshield.io/report/2f19c68ad5e00b6a)
Powered by SkillShield