Security Audit
subagent-driven-development
github.com/guanyang/antigravity-skillsTrust Assessment
subagent-driven-development received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include Potential Prompt Injection via Untrusted Plan Content to Subagents, Exposure of Sensitive Data from External Plan File, Implicit Requirement for Broad Filesystem and Git Permissions.
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 15, 2026 (commit 3e75fabd). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection via Untrusted Plan Content to Subagents The skill instructs the controller LLM to read an external plan file (`docs/plans/feature-plan.md`) and use its full text and context to dispatch subagents. There is no explicit mention of sanitization or validation of this untrusted content before it is passed as part of the prompts to the subagents. A malicious or crafted plan file could inject instructions into the subagents, leading to unintended actions or information disclosure. Implement robust input validation and sanitization for the content read from `docs/plans/feature-plan.md` before it is used to construct prompts for subagents. Consider using a dedicated parsing mechanism that extracts only the intended task descriptions, rather than passing raw text. | Unknown | SKILL.md:87 | |
| MEDIUM | Exposure of Sensitive Data from External Plan File The skill explicitly instructs the LLM to read the content of `docs/plans/feature-plan.md`. If this file contains sensitive information (e.g., API keys, internal project details, personal data), that information will be loaded into the LLM's context and potentially exposed to subagents or logged, increasing the risk of data leakage. Ensure that `docs/plans/feature-plan.md` (and any other files read by the skill) does not contain sensitive information. Implement mechanisms to redact or filter sensitive data if it must be present in such files, or use secure methods for accessing credentials (e.g., environment variables, secret management services) instead of embedding them in plan files. | Unknown | SKILL.md:87 | |
| MEDIUM | Implicit Requirement for Broad Filesystem and Git Permissions The skill orchestrates a complex development workflow involving 'implementer subagent' actions like 'implements, tests, commits, self-reviews' and 'finishing-a-development-branch'. This implies that the LLM and its subagents will require extensive permissions to read/write files, execute tests, and perform powerful Git operations (e.g., creating worktrees, committing, merging, pushing). While the skill itself doesn't grant these permissions, it relies on their availability. If the execution environment is not properly sandboxed or if user consent for such broad actions is not explicitly obtained and verified at each critical step, these permissions could be exploited by a malicious plan or subagent. Implement strict sandboxing for subagent execution environments. Ensure that all critical actions requiring broad permissions (e.g., committing code, merging branches, modifying system files) require explicit user confirmation or are performed within a highly restricted and monitored context. Clearly document the required permissions for users of this skill. | Unknown | SKILL.md:50 |
Scan History
Embed Code
[](https://skillshield.io/report/c370a3a1dfd29462)
Powered by SkillShield