Trust Assessment
hippocampus-memory received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 18 findings: 3 critical, 2 high, 13 medium, and 0 low severity. Key findings include sensitive environment variable access (`$HOME`), command injection via LLM instructions in cron jobs, and command injection via a user-controlled argument in `install.sh`.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored 0/100, the lowest possible score.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (18)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via LLM Instructions in Cron Jobs.** The `install.sh` script sets up cron jobs using `openclaw cron add`. The `--agent-turn` arguments for these cron jobs contain explicit instructions for the LLM to execute shell commands and perform file operations. For example, the `hippocampus-encoding` cron job instructs the LLM to 'Run the encoding pipeline' with a specific bash command, and to 'Delete pending-memories.json' and 'Sync core'. This delegates critical system operations to the LLM, creating a direct command injection vulnerability if the LLM can be manipulated by untrusted input. *Remediation:* Avoid instructing the LLM to directly execute shell commands or perform sensitive file operations. Instead, design the agent to call specific, sandboxed tools for these actions, or have the cron job execute the scripts directly without LLM mediation. If LLM interaction is necessary, ensure all commands are pre-defined and validated, and user input cannot influence the command string. | LLM | install.sh:130 |
| CRITICAL | **Command Injection via User-Controlled Argument in install.sh.** The `install.sh` script directly echoes the user-controlled `SIGNAL_LIMIT` variable into a file (`$WORKSPACE/memory/.signal-limit`) without sanitization. If an attacker provides a malicious value for `--signals` (e.g., `--signals '100; rm -rf /'`), the `echo` command will execute the injected shell commands. *Remediation:* Sanitize or validate user input before using it in shell commands or file operations. For this specific case, ensure `SIGNAL_LIMIT` is a valid integer or 'whole' before writing it to the file. Consider using `printf %s "$SIGNAL_LIMIT"` or a dedicated file-writing utility that doesn't interpret shell metacharacters. | LLM | install.sh:86 |
| CRITICAL | **Command Injection via User-Controlled Query in recall.sh.** The `recall.sh` script directly interpolates the user-controlled `$QUERY` variable into a Python script string. If `$QUERY` contains characters that can escape the Python string literal (e.g., `"` or `\`), an attacker can inject and execute arbitrary Python code. For example, `recall.sh "foo\" + import os; os.system('rm -rf /') + \"bar"` could lead to arbitrary command execution. *Remediation:* Avoid direct string interpolation of user-controlled input into executable code. Pass arguments to Python scripts via `sys.argv` or environment variables, and ensure they are properly parsed and validated within the Python script. If string interpolation is unavoidable, rigorously escape all potentially malicious characters. | LLM | scripts/recall.sh:50 |
| HIGH | **Prompt Injection via LLM Instructions in summarize-pending.sh.** The `summarize-pending.sh` script uses a HEREDOC to provide explicit instructions to the LLM. These instructions include directives to perform file operations ('delete PENDING_FILE', 'Write updated index back') and execute shell commands ('run sync-core.sh', 'run this script again', 'Run: ~/.openclaw/workspace/skills/hippocampus/scripts/decay.sh', 'Run the encoding pipeline: ...'). This design makes the LLM susceptible to prompt injection, where malicious input could manipulate the LLM into executing unintended commands or altering its behavior. *Remediation:* Refactor the LLM interaction to use a more structured and controlled interface for tool use and command execution. Avoid free-form instructions that allow the LLM to generate and execute arbitrary commands. Implement strict validation and allow-listing for any commands the LLM is permitted to invoke. | LLM | scripts/summarize-pending.sh:130 |
| HIGH | **Data Exfiltration (Local Exposure) via Avatar Path in Dashboard.** The `generate-dashboard.sh` script reads the `AVATAR_PATH` from `IDENTITY.md`. If an attacker can control the content of `IDENTITY.md` to point `AVATAR_PATH` at a sensitive local file (e.g., `/etc/passwd`), the script will base64-encode the content of that file and embed it directly into the generated `brain-dashboard.html`. While not exfiltrated remotely by this script, it exposes sensitive local file content in a user-viewable HTML file, which could then be exfiltrated by other means or viewed directly. *Remediation:* Validate `AVATAR_PATH` to ensure it points to an allowed location (e.g., within the workspace or a designated media directory) and is of an expected file type. Implement strict path sanitization and restrict file access to prevent reading arbitrary system files. | LLM | scripts/generate-dashboard.sh:56 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/install.sh:21 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/consolidate.sh:8 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/decay.sh:11 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/encode-pipeline.sh:17 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/generate-dashboard.sh:7 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/load-core.sh:11 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/preprocess.sh:16 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/recall.sh:15 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/reflect.sh:7 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/summarize-pending.sh:14 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to `$HOME` detected in shell context. *Remediation:* Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/impkind/hippocampus-memory/scripts/sync-core.sh:11 |
| MEDIUM | **Prompt Injection / XSS via Unescaped HTML in Dashboard.** The `generate-dashboard.sh` script generates an HTML dashboard by directly embedding variables like `$AGENT_NAME` (derived from `IDENTITY.md`) without proper HTML escaping. If `IDENTITY.md` contains malicious HTML or JavaScript (e.g., `<script>alert('XSS')</script>`), this content will be injected into the `brain-dashboard.html`. This could lead to Cross-Site Scripting (XSS) in the local browser when the dashboard is opened, or prompt injection if the HTML is later fed as context to an LLM. *Remediation:* Before embedding any user-controlled or dynamically generated text into HTML, ensure it is properly HTML-escaped to prevent XSS. For LLM context, consider stripping all HTML tags or explicitly marking sections as non-executable code. | LLM | scripts/generate-dashboard.sh:100 |
| MEDIUM | **Prompt Injection via Memory Content in LLM Context.** The `load-core.sh`, `recall.sh`, and `sync-core.sh` scripts output memory content (`m['content']` from `index.json`) directly. These outputs are explicitly intended for 'context injection' into an LLM or for RAG (`HIPPOCAMPUS_CORE.md` is added to `memorySearch.extraPaths`). If the `index.json` file is compromised and contains malicious prompt injection attempts within memory content, these attempts will be directly fed to the LLM, potentially manipulating its behavior or extracting sensitive information. *Remediation:* Implement input validation and sanitization for all memory content stored in `index.json`. When feeding memory content to an LLM, consider using techniques like XML/JSON tagging or specific delimiters to clearly separate data from instructions, and apply strict content filtering to prevent prompt injection. | LLM | scripts/load-core.sh:34 |
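The `SIGNAL_LIMIT` finding recommends validating the `--signals` value as an integer or the literal `whole` before it is written anywhere. A minimal sketch of that check, assuming hypothetical names (`validate_signal_limit` is not part of the skill):

```shell
#!/bin/sh
# Hypothetical sketch: accept only an all-digit value or the keyword "whole"
# before the --signals value is used. Names here are illustrative.

validate_signal_limit() {
    case "$1" in
        whole) return 0 ;;           # allowed keyword
        ''|*[!0-9]*) return 1 ;;     # empty, or contains a non-digit: reject
        *) return 0 ;;               # all digits: accept
    esac
}

for candidate in '100' 'whole' '100; rm -rf /'; do
    if validate_signal_limit "$candidate"; then
        # printf %s emits the value verbatim; nothing is re-interpreted.
        printf 'accepted: %s\n' "$candidate"
    else
        printf 'rejected: %s\n' "$candidate"
    fi
done
```

The `case` pattern `*[!0-9]*` matches any string containing a non-digit character, so shell metacharacters like `;` are rejected before they can reach `echo` or the filesystem.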
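The `recall.sh` finding recommends passing the query to Python via `sys.argv` rather than interpolating it into the script text. A minimal sketch under that assumption (the variable name and message are illustrative):

```shell
#!/bin/sh
# Hypothetical sketch: hand the user query to Python as an argument so it is
# parsed as data, never evaluated as code.
QUERY='foo" ; import os; os.system("echo pwned") ; "'

# Unsafe (the pattern the finding describes): splicing $QUERY into the Python
# source lets quotes and backslashes escape the string literal.
# Safe: the query arrives via sys.argv and is treated purely as a string.
python3 -c '
import sys
query = sys.argv[1]   # arbitrary bytes, never interpreted as Python
print("searching for:", query)
' "$QUERY"
```

Because the shell passes `$QUERY` as a single quoted argument, the embedded quotes and `os.system` call are printed back verbatim instead of being executed.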
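The cron-job and HEREDOC findings both recommend allow-listing: the LLM (or a cron entry) should only be able to name a pre-defined task, never supply a command string. A minimal sketch, with task names and script paths invented for illustration:

```shell
#!/bin/sh
# Hypothetical sketch of command allow-listing. The dispatcher maps a fixed
# set of task names to fixed actions; any other input is refused.

run_allowed_task() {
    case "$1" in
        decay)     echo "would run: scripts/decay.sh" ;;
        sync-core) echo "would run: scripts/sync-core.sh" ;;
        *)         echo "refused unknown task: $1" >&2; return 1 ;;
    esac
}

run_allowed_task decay
run_allowed_task 'rm -rf /' || echo "injection attempt blocked"
```

The untrusted side can only choose among the listed branches; the command strings themselves never pass through the LLM.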
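The `AVATAR_PATH` finding recommends confining the avatar file to an allowed location. A minimal sketch assuming GNU `realpath` is available (the workspace layout and function name are illustrative): resolve the path, including symlinks, and require the result to stay inside the workspace.

```shell
#!/bin/sh
# Hypothetical sketch of a workspace-confinement check for AVATAR_PATH.
WORKSPACE=$(mktemp -d)
mkdir -p "$WORKSPACE/media"
: > "$WORKSPACE/media/avatar.png"

is_allowed_avatar() {
    # realpath follows symlinks, so a link pointing outside is also caught.
    resolved=$(realpath -- "$1" 2>/dev/null) || return 1
    case "$resolved" in
        "$WORKSPACE"/*) return 0 ;;   # inside the workspace: allowed
        *)              return 1 ;;   # e.g. /etc/passwd: rejected
    esac
}

is_allowed_avatar "$WORKSPACE/media/avatar.png" && echo "avatar ok"
is_allowed_avatar /etc/passwd || echo "rejected /etc/passwd"
```

Checking the resolved path rather than the literal string closes both the `../` traversal and symlink variants of the attack.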
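The dashboard XSS finding recommends HTML-escaping every dynamic value before it is embedded. A minimal sketch of such an escape in shell (the function name and sample value are illustrative, not from the skill):

```shell
#!/bin/sh
# Hypothetical sketch: escape the five significant HTML metacharacters.

html_escape() {
    # Replace & first so the entities added afterwards are not double-escaped.
    printf '%s' "$1" | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' \
        -e 's/>/\&gt;/g' -e 's/"/\&quot;/g' -e "s/'/\&#39;/g"
}

AGENT_NAME='<script>alert("XSS")</script>'
printf '<h1>%s</h1>\n' "$(html_escape "$AGENT_NAME")"
```

With the value escaped, the injected `<script>` tag renders as inert text in `brain-dashboard.html` instead of executing in the browser.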
Full report: [skillshield.io/report/c7d976b89017fbcd](https://skillshield.io/report/c7d976b89017fbcd)