Trust Assessment
memory-scan received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 16 findings: 4 critical, 7 high, 5 medium, and 0 low severity. Key findings include Unsafe environment variable passthrough, Arbitrary command execution, Credential harvesting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings16
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/dgriffin831/memory-scan/evals/run.py:69 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/dgriffin831/memory-scan/scripts/memory-scan.py:132 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/dgriffin831/memory-scan/scripts/memory-scan.py:113 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/dgriffin831/memory-scan/scripts/memory-scan.py:114 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/dgriffin831/memory-scan/scripts/memory-scan.py:113 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/dgriffin831/memory-scan/scripts/memory-scan.py:114 | |
| HIGH | Hardcoded OpenAI API Key detected A hardcoded OpenAI API Key was found. Secrets should be stored in environment variables or a secret manager. Replace the hardcoded secret with an environment variable reference. | Static | skills/dgriffin831/memory-scan/scripts/test-scan.sh:37 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'run_scan'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/dgriffin831/memory-scan/evals/run.py:69 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'get_llm_config'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/dgriffin831/memory-scan/scripts/memory-scan.py:132 | |
| HIGH | LLM Prompt Injection Vulnerability The `llm_scan` function in `memory-scan.py` sends untrusted, user-provided memory content (even if redacted for credentials) directly to an external LLM as part of the prompt. The skill's system prompt does not employ robust input isolation techniques (e.g., XML tags or specific delimiters) to prevent the untrusted content from overriding or manipulating the LLM's instructions. An attacker could craft malicious memory entries that, when scanned by the LLM, inject new instructions to bypass the security analysis, exfiltrate the system prompt, or generate arbitrary outputs. Implement robust input isolation for LLM calls. Wrap untrusted `redacted_content` within specific, unambiguous delimiters (e.g., `<user_input>...</user_input>`) and explicitly instruct the LLM in the system prompt to treat everything within these delimiters as data, not instructions. Additionally, consider using LLM safety features or input sanitization specifically designed to neutralize prompt injection attempts. | LLM | scripts/memory-scan.py:120 | |
| HIGH | Path Traversal in Quarantine Script The `quarantine.py` script constructs file paths for backup and redaction using `os.path.join(WORKSPACE, file_path)`. While it checks for absolute paths, it does not prevent path traversal sequences (e.g., `../../`) within the `file_path` argument. An attacker could potentially provide a path like `../../.ssh/id_rsa` to target and modify sensitive files outside the intended memory directories but still within the user's OpenClaw workspace or home directory, leading to data integrity issues or potential information disclosure if backups are later accessed. Sanitize the `file_path` argument to prevent path traversal. Use `os.path.abspath()` and ensure the resolved path remains within the `WORKSPACE` directory or a specifically allowed subdirectory. For example, check if `os.path.commonprefix([WORKSPACE, resolved_path]) == WORKSPACE`. | LLM | scripts/quarantine.py:57 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/dgriffin831/memory-scan/SKILL.md:1 | |
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/dgriffin831/memory-scan/scripts/memory-scan.py:234 | |
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/dgriffin831/memory-scan/scripts/memory-scan.py:274 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/dgriffin831/memory-scan/scripts/test-scan.sh:7 | |
| MEDIUM | Unpinned Dependencies in Setup Script The `scripts/setup-venv.sh` script installs Python dependencies (`openai`, `anthropic`) without specifying exact version numbers. This practice can lead to supply chain risks, as future versions of these libraries might introduce breaking changes, new vulnerabilities, or even malicious code. Without pinned versions, the skill's behavior could change unexpectedly, or it could become vulnerable to issues in newer library releases. Pin all dependencies to exact versions (e.g., `openai==1.2.3`) in a `requirements.txt` file. The `setup-venv.sh` script should then install from this file (`pip install -r requirements.txt`). Regularly review and update these pinned versions to incorporate security fixes while maintaining control over the dependency tree. | LLM | scripts/setup-venv.sh:17 |
Scan History
Embed Code
[](https://skillshield.io/report/c997d5995c147509)
Powered by SkillShield