Trust Assessment
researchvault received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 21 findings: 4 critical, 8 high, 8 medium, and 1 low severity. Key findings include Unsafe environment variable passthrough, Arbitrary command execution, Credential harvesting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings21
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary command execution Python shell execution (os.system, subprocess) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/lraivisto/vault-research/tests/test_smoke_cli.py:6 | |
| CRITICAL | Credential harvesting Bulk environment variable dump Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/vault-research/tests/test_mcp_server.py:12 | |
| CRITICAL | Credential harvesting Bulk environment variable dump Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/vault-research/tests/test_smoke_cli.py:18 | |
| CRITICAL | Credential harvesting Bulk environment variable dump Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/vault-research/tests/test_smoke_cli.py:60 | |
| HIGH | Unsafe environment variable passthrough Bulk environment variable harvesting Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/vault-research/tests/test_mcp_server.py:12 | |
| HIGH | Unsafe environment variable passthrough Bulk environment variable harvesting Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/vault-research/tests/test_smoke_cli.py:18 | |
| HIGH | Unsafe environment variable passthrough Bulk environment variable harvesting Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/vault-research/tests/test_smoke_cli.py:60 | |
| HIGH | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/lraivisto/vault-research/tests/conftest.py:46 | |
| HIGH | Hidden network beacons / undisclosed telemetry DNS resolution call that could be used for data exfiltration Remove undisclosed network calls and telemetry. All outbound communication should be documented and necessary for the skill's stated purpose. BCC injection in email tools is almost always malicious. | Manifest | skills/lraivisto/vault-research/scripts/scuttle.py:35 | |
| HIGH | Dangerous call: subprocess.run() Call to 'subprocess.run()' detected in function '_run_cli'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/lraivisto/vault-research/tests/test_smoke_cli.py:6 | |
| HIGH | Arbitrary Local File Read via Artifact Addition The `add_artifact` function, exposed via both the `vault artifact add` CLI command and the `vault_add_artifact` MCP tool, allows an attacker to specify an arbitrary local file path. The content of this file is then read by `_read_text_file` and stored in the database as part of the artifact's data. This enables an attacker to exfiltrate the content of any file readable by the agent's process, such as `/etc/passwd`, `~/.ssh/id_rsa`, or other sensitive configuration files. Implement strict path validation for `add_artifact`. Restrict paths to a designated, sandboxed directory (e.g., `~/.researchvault/artifacts/<project_id>/`). Disallow absolute paths or paths containing `..`. For agent skills, consider if direct file system access is truly necessary or if content should be passed directly. | LLM | scripts/core.py:604 | |
| HIGH | Arbitrary Local File Write via Project Export The `export_project_data` function, called by the `vault export` CLI command, allows a user to specify an arbitrary `--output` file path. The function then directly writes the project's data (findings, events, etc.) to this path. This could allow an attacker to overwrite critical system files, write to sensitive directories, or create large files to consume disk space, leading to denial of service or system instability. Implement strict path validation for the `--output` argument. Restrict output paths to a designated, sandboxed directory (e.g., `~/.researchvault/exports/`). Disallow absolute paths or paths containing `..`. | LLM | scripts/core.py:561 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research/scripts/core.py:7 | |
| MEDIUM | Suspicious import: socket Import of 'socket' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research/scripts/scuttle.py:5 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research/scripts/scuttle.py:7 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research/tests/test_grokipedia.py:2 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/vault-research/tests/test_youtube.py:2 | |
| MEDIUM | Unpinned Python dependency version Dependency 'requests>=2.32.0' is not pinned to an exact version. Pin Python dependencies with exact versions where feasible. | Dependencies | skills/lraivisto/vault-research/pyproject.toml | |
| MEDIUM | Sensitive Data Leakage via External Brave Search API The `perform_brave_search` function, used by the `vault search` CLI command and the `watchdog` module, sends user-provided `query` strings to the external Brave Search API. If an agent or user passes sensitive or private information within these queries, that data will be transmitted to Brave, potentially violating privacy or data security policies. Implement a clear warning or sanitization step for user-provided queries before sending them to external APIs. Advise users not to include sensitive information in search queries. Consider adding a configuration option to disable external search if privacy is paramount. | LLM | scripts/core.py:470 | |
| MEDIUM | Stored Prompt Injection Vector in Findings/Insights User-provided `title` and `content` for findings/insights (via `vault insight add` CLI or `vault_add_finding` MCP tool) are stored directly in the SQLite database. If this stored content is later retrieved and directly fed into a Large Language Model (LLM) without proper sanitization or escaping, it could be exploited for prompt injection, allowing an attacker to manipulate the LLM's behavior or extract information. The `watchdog` module also stores search results as insights, which could contain malicious instructions from external websites. When retrieving and presenting stored findings/insights to an LLM, ensure that the content is properly escaped or passed through a sanitization layer designed to neutralize prompt injection attempts. Clearly document the risk of storing untrusted content that may later be used as LLM input. | LLM | scripts/core.py:375 | |
| LOW | Unpinned Dependencies The `pyproject.toml` file specifies dependencies using minimum versions (e.g., `requests>=2.32.0`) rather than exact versions. This practice can introduce supply chain risks, as newer versions of these dependencies might contain vulnerabilities or breaking changes that could impact the security or stability of the skill without explicit review. Pin dependencies to exact versions (e.g., `requests==2.32.0`) to ensure deterministic builds and prevent unexpected changes from upstream packages. Use a dependency management tool (like `uv` or `pip-tools`) to manage and update dependencies securely. | LLM | pyproject.toml:10 |
Scan History
Embed Code
[](https://skillshield.io/report/6e263822984c2d82)
Powered by SkillShield