Trust Assessment
researchvault received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 20 findings: 4 critical, 7 high, 9 medium, and 0 low severity. Key findings include Unsafe environment variable passthrough, Arbitrary command execution, Credential harvesting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings20
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/lraivisto/vault-research-brain/tests/test_smoke_cli.py:6 | |
| CRITICAL | Credential harvesting Bulk environment variable dump Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/vault-research-brain/tests/test_mcp_server.py:12 | |
| CRITICAL | Credential harvesting Bulk environment variable dump Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/vault-research-brain/tests/test_smoke_cli.py:18 | |
| CRITICAL | Credential harvesting Bulk environment variable dump Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/vault-research-brain/tests/test_smoke_cli.py:60 | |
| HIGH | Unsafe environment variable passthrough Bulk environment variable harvesting Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/vault-research-brain/tests/test_mcp_server.py:12 | |
| HIGH | Unsafe environment variable passthrough Bulk environment variable harvesting Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/vault-research-brain/tests/test_smoke_cli.py:18 | |
| HIGH | Unsafe environment variable passthrough Bulk environment variable harvesting Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/vault-research-brain/tests/test_smoke_cli.py:60 | |
| HIGH | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/lraivisto/vault-research-brain/tests/conftest.py:46 | |
| HIGH | Hidden network beacons / undisclosed telemetry DNS resolution call that could be used for data exfiltration Remove undisclosed network calls and telemetry. All outbound communication should be documented and necessary for the skill's stated purpose. BCC injection in email tools is almost always malicious. | Manifest | skills/lraivisto/vault-research-brain/scripts/scuttle.py:35 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function '_run_cli'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/lraivisto/vault-research-brain/tests/test_smoke_cli.py:6 | |
| HIGH | Arbitrary File Read via Artifact Path The `add_artifact` function in `scripts/core.py` and the `vault_add_artifact` MCP tool (exposed in `scripts/mcp_server.py`) allow a user or agent to specify an arbitrary file path (`path` argument). The `scripts/synthesis.py` module's `_read_text_file` function then reads the content of this file. This enables an attacker to specify paths to sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) on the system, leading to their content being read and potentially exfiltrated or processed by the LLM. Although `core.add_artifact` checks for file existence and readability, it does not restrict the file's location to a safe, sandboxed directory. Restrict artifact paths to a designated, sandboxed directory (e.g., within the project's workspace). Implement path sanitization to prevent directory traversal attacks (e.g., `os.path.abspath` followed by checking if it's within allowed prefixes). For critical paths, require explicit user confirmation or a whitelist of allowed file types/locations. | LLM | scripts/core.py:199 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research-brain/scripts/core.py:7 | |
| MEDIUM | Suspicious import: socket Import of 'socket' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research-brain/scripts/scuttle.py:5 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research-brain/scripts/scuttle.py:7 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research-brain/tests/test_grokipedia.py:2 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research-brain/tests/test_youtube.py:2 | |
| MEDIUM | Unpinned Python dependency version Dependency 'requests>=2.32.0' is not pinned to an exact version. Pin Python dependencies with exact versions where feasible. | Dependencies | skills/lraivisto/vault-research-brain/pyproject.toml | |
| MEDIUM | Sensitive Data Exfiltration via Watchdog Search Queries The `watchdog` feature in `scripts/watchdog.py` allows users or agents to define `query` targets that are periodically executed. These queries are passed to `core.perform_brave_search` (an external search API). If a user-controlled query contains sensitive information, this data will be sent to the external search provider, leading to data exfiltration. Implement strict sanitization and validation for watchdog queries to remove or redact sensitive information. Warn users about the privacy implications of submitting sensitive data to external search APIs. Consider using a local, private search index for sensitive queries. | LLM | scripts/watchdog.py:140 | |
| MEDIUM | Broad Agent Tool Access Without Fine-Grained Controls The `scripts/mcp_server.py` exposes a comprehensive set of tools (`vault_list_projects`, `vault_create_project`, `vault_add_finding`, `vault_add_artifact`, `vault_synthesize`, etc.) to any connected agent. While these are core functionalities, the current implementation lacks fine-grained access control or robust input validation specific to the agent context. A malicious or compromised agent could abuse these broad permissions to inject arbitrary data into the vault, trigger resource-intensive operations, or potentially exploit other vulnerabilities (like the arbitrary file read via `vault_add_artifact`). This increases the attack surface for agent-based attacks. Implement authentication and authorization mechanisms for MCP tools. Introduce granular permissions based on agent identity or role. Add more stringent input validation and rate-limiting for agent-provided arguments to prevent abuse and resource exhaustion. | LLM | scripts/mcp_server.py:15 | |
| MEDIUM | Untrusted Content Ingestion as Potential Prompt Injection Vector The skill ingests and stores large amounts of untrusted content from various external sources (web pages via `scuttle.py`, search results via `watchdog.py`, user-specified artifact files via `core.py`). This content (titles, descriptions, file contents, findings, hypotheses) is stored in the database and later used for internal processing (e.g., embedding in `synthesis.py`) and potentially for generating responses or prompts for other LLMs. If this untrusted content is not properly sanitized or escaped before being incorporated into an LLM prompt, it could be used to manipulate the host LLM or other agents, leading to prompt injection attacks. Implement robust sanitization and escaping mechanisms for all untrusted ingested content before it is used in LLM prompts. Clearly delineate trusted instructions from untrusted data when constructing prompts. Consider using LLM-specific prompt templating libraries that offer built-in protections. | LLM | scripts/core.py:307 |
Scan History
Embed Code
[](https://skillshield.io/report/e99f3d0e8f1e6f9f)
Powered by SkillShield