Security Audit
Jamkris/everything-gemini-code:skills/autonomous-loops
github.com/Jamkris/everything-gemini-code

Trust Assessment
Jamkris/everything-gemini-code:skills/autonomous-loops received a trust score of 0/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 2 critical, 2 high, 1 medium, and 0 low severity. Key findings include file-read-plus-network-send exfiltration, sensitive path access to AI agent config, and command injection in the `continuous-claude` tool via prompt arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 48/100.
Last analyzed on March 30, 2026 (commit 6c6f43aa). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. Remediation: remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/autonomous-loops/SKILL.md:118 |
| CRITICAL | **`continuous-claude` tool allows command injection via prompt arguments.** The 'Continuous Claude PR Loop' section demonstrates the `continuous-claude` tool using `--review-prompt "Run npm test && npm run lint, fix any failures"`. This explicitly shows shell commands (`npm test && npm run lint`) being passed as part of a prompt argument, indicating that the tool is designed to execute arbitrary shell commands provided within its prompt arguments. If an attacker can control the content of these prompts (e.g., through a malicious configuration file, an environment variable, or a preceding agent's output), they can achieve arbitrary command execution on the host system. Remediation: the `continuous-claude` tool should strictly separate instructions for the LLM from commands intended for shell execution. If shell execution is necessary, it should go through a dedicated, sandboxed mechanism with strict input validation and an allow-list of permitted commands and arguments, rather than directly interpreting parts of the LLM prompt as shell commands. | LLM | SKILL.md:290 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.claude/`. This may indicate credential theft. Remediation: verify that access to this sensitive path is justified and declared. | Static | skills/autonomous-loops/SKILL.md:118 |
| HIGH | **Claude command `infinite.md` may allow command injection via `output_dir` and the `Task tool`.** The `infinite.md` Claude command, defined in the 'Infinite Agentic Loop' section, includes the instruction `List output_dir`. If the `List` operation is implemented by the underlying `claude` tool as direct shell command execution, then the `output_dir` argument (controlled by whoever invokes the command) could be used for command injection (e.g., `output_dir=".; rm -rf /"`). Similarly, the `Deploy sub-agents in parallel (Task tool)` instruction implies the `Task tool` might execute commands derived from user-controlled input (such as `spec_file` content or arguments passed to sub-agents). This creates a potential path to arbitrary command execution if inputs are not sanitized. Remediation: ensure arguments like `output_dir` are sanitized or validated before use in any shell command; if `List` and the `Task tool` are internal `claude` operations, clarify that they do not execute shell commands built from unsanitized input; implement robust input validation for `spec_file` content and all arguments passed to sub-agents. | LLM | SKILL.md:160 |
| MEDIUM | **Agentic loops involve broad file system and process access, posing data exfiltration and excessive-permission risks.** The 'Infinite Agentic Loop', 'Continuous Claude PR Loop', and 'Ralphinho / RFC-Driven DAG Orchestration' patterns describe highly autonomous systems that require extensive permissions: reading and writing the file system (e.g., `output_dir`, `SHARED_TASK_NOTES.md`, `SQLite` persistence), executing shell commands (`npm test`, `gh pr create`), and interacting with external services (GitHub). While these permissions are necessary for their functionality, the document does not sufficiently warn about the security implications of feeding untrusted input (e.g., a malicious `spec_file`, `RFC/PRD Document`, or `--prompt` content) into these systems. Such inputs could lead to unauthorized data exfiltration, modification, or denial of service given the broad access granted to the agents. Remediation: add explicit security warnings about running these autonomous loops on untrusted or partially trusted inputs; recommend running agents in isolated, sandboxed environments with minimal necessary permissions; emphasize input validation and sanitization for all data that influences agent behavior or commands; and clearly document what data is persisted and where. | LLM | SKILL.md:150 |
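The remediation for the `continuous-claude` finding — strictly separating LLM prompt text from shell execution — can be sketched as follows. This is a minimal illustration, not part of the skill: the `run_review_command` name and the allow-list contents are assumptions.

```python
import shlex
import subprocess

# Illustrative allow-list; the real set would come from the skill's
# declared functionality, never from LLM prompt content.
ALLOWED_COMMANDS = {
    ("npm", "test"),
    ("npm", "run", "lint"),
}

def run_review_command(command: str) -> subprocess.CompletedProcess:
    """Run a review command only if it exactly matches the allow-list.

    shlex.split tokenizes without invoking a shell, so metacharacters
    such as `&&`, `;`, and backticks in attacker-controlled input
    become literal tokens that fail the exact allow-list match.
    """
    argv = tuple(shlex.split(command))
    if argv not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {command!r}")
    # shell=False: argv goes directly to the OS, never through a shell.
    return subprocess.run(argv, shell=False, capture_output=True, text=True)
```

With this pattern, the chained string from the report (`npm test && npm run lint`) is rejected outright rather than interpreted as two shell commands.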
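For the `infinite.md` finding, a user-supplied `output_dir` can be confined to the workspace before any file operation. A hedged sketch; `resolve_output_dir` and the workspace layout are assumptions, not the skill's actual API:

```python
from pathlib import Path

def resolve_output_dir(workspace: Path, user_value: str) -> Path:
    """Resolve a user-supplied output_dir strictly inside `workspace`.

    Combined with never passing the value to a shell, this blocks both
    path traversal (`../../etc`) and injection-style values such as
    `.; rm -rf /`, which become odd directory names that must still
    resolve inside the workspace.
    """
    workspace = workspace.resolve()
    candidate = (workspace / user_value).resolve()
    if not candidate.is_relative_to(workspace):  # Python 3.9+
        raise ValueError(f"output_dir escapes workspace: {user_value!r}")
    return candidate
```

The design choice here is containment rather than character filtering: instead of trying to enumerate dangerous characters, the check normalizes the path and verifies the single invariant that matters.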
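The sensitive-path findings (`~/.claude/` access) could be guarded by a denylist check before any file read. Again a sketch under assumptions: the prefix list and the `is_sensitive_path` name are illustrative, not SkillShield's or the skill's actual mechanism.

```python
from pathlib import Path

# Illustrative denylist of agent-config and credential locations.
SENSITIVE_PREFIXES = [
    Path("~/.claude").expanduser().resolve(),
    Path("~/.ssh").expanduser().resolve(),
    Path("~/.aws").expanduser().resolve(),
]

def is_sensitive_path(path: str) -> bool:
    """Return True if `path` falls under a known sensitive prefix.

    Resolving both sides defeats `..` traversal and symlink tricks,
    e.g. `~/project/../.ssh/id_rsa` still matches the `~/.ssh` prefix.
    """
    p = Path(path).expanduser().resolve()
    return any(p == s or p.is_relative_to(s) for s in SENSITIVE_PREFIXES)
```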
Full report: https://skillshield.io/report/f48f4dae741a833c