Security Audit
Jamkris/everything-gemini-code:skills/continuous-learning-v2
github.com/Jamkris/everything-gemini-code

Trust Assessment
Jamkris/everything-gemini-code:skills/continuous-learning-v2 received a trust score of 0/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 2 critical, 1 high, 3 medium, and 0 low severity. Key findings include command injection via Python here-doc interpolation, a suspicious import (`urllib.request`), and sensitive environment variable access (`$HOME`).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 4/100, reflecting the critical static-analysis findings detailed below.
Last analyzed on March 30, 2026 (commit 6c6f43aa). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via Python Here-doc Interpolation.** The `hooks/observe.sh` script interpolates the `$INPUT_JSON` variable, which carries untrusted data from the Gemini CLI hook, directly into a Python here-doc string literal (`json.loads('''$INPUT_JSON''')`). A crafted payload in `$INPUT_JSON` can break out of the string literal and execute arbitrary Python code on the host, with the privileges of the user running the Gemini CLI. Mitigation: never interpolate `$INPUT_JSON` into Python source. Pass it as a command-line argument, via stdin, or via a temporary file, and parse it with `json.loads()` inside the Python script. | Static | hooks/observe.sh:59 |
| CRITICAL | **Persistent Prompt Injection via Untrusted Observations.** The `agents/start-observer.sh` script invokes the `gemini` command with a prompt instructing the LLM to read and analyze `observations.jsonl`, which `hooks/observe.sh` populates with `tool_input` and `tool_output` values that an attacker can control. This creates a persistent prompt-injection vector: malicious tool data can embed instructions that manipulate the LLM's behavior when read. Because the LLM is explicitly instructed to 'create an instinct file in $CONFIG_DIR/instincts/personal/', an attacker can plant malicious instincts that persistently influence the LLM's future actions or exfiltrate data. Mitigation: sanitize or filter `tool_input` and `tool_output` before writing them to `observations.jsonl`; wrap untrusted data in explicit delimiters (XML/JSON tags) and instruct the LLM to treat delimited content as literal data, not instructions; and consider a separate, sandboxed LLM, or one with strictly limited capabilities, for analyzing untrusted observations, to prevent self-modification or data exfiltration. | Static | agents/start-observer.sh:86 |
| HIGH | **Supply Chain Risk / Data Exfiltration via Instinct Import from URL.** The `cmd_import` function in `scripts/instinct-cli.py` lets users import instincts from an arbitrary user-provided URL, downloading external content and storing it locally as an 'instinct' in `~/.gemini/homunculus/instincts/inherited/`. A malicious URL could point to an attacker-controlled server and deliver prompt-injection payloads or instructions designed to manipulate the LLM's behavior. While the script does not directly exfiltrate local data, fetching arbitrary external content is a significant supply-chain risk and can become an exfiltration channel if an imported instinct later instructs the LLM to send data to an external endpoint. Mitigation: restrict imports to trusted domains or require explicit user confirmation for external sources; sandbox the import process; scan imported instincts for malicious patterns before storing or using them; and have the LLM treat imported instincts with lower confidence or require human review before applying them. | Static | scripts/instinct-cli.py:169 |
| MEDIUM | **Suspicious import: `urllib.request`.** Import of `urllib.request` detected. This module provides network access, and network or low-level system modules in skill code may indicate data exfiltration. Verify this import is necessary. | Static | skills/continuous-learning-v2/scripts/instinct-cli.py:17 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in a shell context. Verify this access is necessary and that the value is not exfiltrated. | Static | skills/continuous-learning-v2/agents/start-observer.sh:14 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in a shell context. Verify this access is necessary and that the value is not exfiltrated. | Static | skills/continuous-learning-v2/hooks/observe.sh:39 |
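The fix recommended for the here-doc injection finding can be sketched as follows. This is a minimal illustration, not the project's actual code; the function name `parse_hook_payload` is hypothetical. The key property is that the payload is read from a stream (such as stdin) and parsed as data inside Python, so untrusted bytes never appear inside Python source text:

```python
import io
import json

def parse_hook_payload(stream):
    """Parse a hook payload from a stream (e.g. sys.stdin).

    Because the bytes are read as data and handed to json.loads(),
    an injection attempt can at worst produce a parse error; it can
    never be executed as Python source.
    """
    try:
        return json.loads(stream.read())
    except json.JSONDecodeError:
        # Fail closed on malformed or adversarial input.
        return None
```

On the shell side, the hook would pipe the payload in (for example `printf '%s' "$INPUT_JSON" | python3 parse_hook.py`) instead of interpolating `$INPUT_JSON` into a here-doc.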
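The delimiter technique suggested for the prompt-injection finding can be sketched like this. The helper name `wrap_untrusted` and the tag name are illustrative assumptions, not part of the skill's code:

```python
def wrap_untrusted(text: str) -> str:
    """Wrap untrusted tool output in explicit data delimiters.

    The surrounding prompt would instruct the model to treat anything
    between the tags as literal data, never as instructions. Any
    delimiter look-alikes the attacker embedded are stripped first so
    a payload cannot fake an early close of the data block.
    """
    cleaned = text.replace("<untrusted-data>", "").replace("</untrusted-data>", "")
    return f"<untrusted-data>\n{cleaned}\n</untrusted-data>"
```

Delimiting alone does not make injection impossible, which is why the finding also recommends sandboxing or capability limits for the analyzing LLM.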
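The URL restriction recommended for the instinct-import finding could look like the sketch below. The helper name and the allowlist contents are assumptions for illustration; a real deployment would choose its own trusted hosts:

```python
from urllib.parse import urlparse

# Illustrative allowlist; a real deployment would configure its own.
TRUSTED_HOSTS = {"github.com", "raw.githubusercontent.com"}

def is_trusted_instinct_url(url: str) -> bool:
    """Accept only HTTPS URLs whose host is on an explicit allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False  # reject http:, file:, data:, etc.
    return parsed.hostname in TRUSTED_HOSTS
```

`cmd_import` would call such a check before fetching, and still scan the downloaded instinct before storing it, since an allowlisted host can also serve malicious content.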
Full report: https://skillshield.io/report/161aa5d1af573fd3

Powered by SkillShield