Trust Assessment
researchvault received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 21 findings: 4 critical, 8 high, 8 medium, and 1 low severity. Key findings include unsafe environment variable passthrough, arbitrary command execution, and credential harvesting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (21)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/lraivisto/vaultresearch/tests/test_smoke_cli.py:6` |
| CRITICAL | **Credential harvesting.** Bulk environment variable dump. *Remediation:* Skills should only access environment variables they explicitly need. Bulk environment dumps (`os.environ.copy`, `JSON.stringify(process.env)`) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | `skills/lraivisto/vaultresearch/tests/test_mcp_server.py:12` |
| CRITICAL | **Credential harvesting.** Bulk environment variable dump. *Remediation:* Skills should only access environment variables they explicitly need. Bulk environment dumps (`os.environ.copy`, `JSON.stringify(process.env)`) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | `skills/lraivisto/vaultresearch/tests/test_smoke_cli.py:18` |
| CRITICAL | **Credential harvesting.** Bulk environment variable dump. *Remediation:* Skills should only access environment variables they explicitly need. Bulk environment dumps (`os.environ.copy`, `JSON.stringify(process.env)`) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | `skills/lraivisto/vaultresearch/tests/test_smoke_cli.py:60` |
| HIGH | **Unsafe environment variable passthrough.** Bulk environment variable harvesting. *Remediation:* Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | `skills/lraivisto/vaultresearch/tests/test_mcp_server.py:12` |
| HIGH | **Unsafe environment variable passthrough.** Bulk environment variable harvesting. *Remediation:* Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | `skills/lraivisto/vaultresearch/tests/test_smoke_cli.py:18` |
| HIGH | **Unsafe environment variable passthrough.** Bulk environment variable harvesting. *Remediation:* Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | `skills/lraivisto/vaultresearch/tests/test_smoke_cli.py:60` |
| HIGH | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. *Remediation:* Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/lraivisto/vaultresearch/tests/conftest.py:46` |
| HIGH | **Hidden network beacons / undisclosed telemetry.** DNS resolution call that could be used for data exfiltration. *Remediation:* Remove undisclosed network calls and telemetry. All outbound communication should be documented and necessary for the skill's stated purpose. BCC injection in email tools is almost always malicious. | Manifest | `skills/lraivisto/vaultresearch/scripts/scuttle.py:35` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call to `subprocess.run()` detected in function `_run_cli`. This can execute arbitrary code. *Remediation:* Avoid using dangerous functions like `exec`/`eval`/`os.system`. Use safer alternatives. | Static | `skills/lraivisto/vaultresearch/tests/test_smoke_cli.py:6` |
| HIGH | **Arbitrary file read via artifact path.** The skill allows adding artifacts by specifying an arbitrary file path; the file's content is then read and stored in the skill's database. An attacker can exploit this to read sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, configuration files) and potentially exfiltrate their contents. The functionality is exposed both via the `vault.py` CLI command `artifact add --path <FILE_PATH>` and the `mcp_server.py` tool `vault_add_artifact(path=<FILE_PATH>)`. *Remediation:* Implement strict path validation and sandboxing. Restrict artifact paths to a designated, non-sensitive directory, e.g., only allow paths relative to a project-specific artifact directory, or use an allow-list for file types/locations. If reading arbitrary files is a core feature, clearly document the security implications and provide strong warnings. | LLM | `scripts/vault.py:110` |
| HIGH | **Arbitrary file write via export output path.** The `export` command lets a user specify an `--output` file path, enabling writes of potentially sensitive project data (findings, insights, etc.) to any location on the filesystem. An attacker could overwrite critical system files, write sensitive data to publicly accessible locations, or inject malicious content into configuration/script files that are later executed. *Remediation:* Restrict output paths to a designated, non-sensitive directory. Sanitize paths to prevent directory traversal (e.g., `../`). If writing to arbitrary paths is necessary, require explicit user confirmation for paths outside a safe directory. | LLM | `scripts/vault.py:40` |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network or low-level system access. *Remediation:* Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | `skills/lraivisto/vaultresearch/scripts/core.py:7` |
| MEDIUM | **Suspicious import: `socket`.** Import of `socket` detected; this module provides network or low-level system access. *Remediation:* Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | `skills/lraivisto/vaultresearch/scripts/scuttle.py:5` |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network or low-level system access. *Remediation:* Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | `skills/lraivisto/vaultresearch/scripts/scuttle.py:7` |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network or low-level system access. *Remediation:* Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | `skills/lraivisto/vaultresearch/tests/test_grokipedia.py:2` |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network or low-level system access. *Remediation:* Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | `skills/lraivisto/vaultresearch/tests/test_youtube.py:2` |
| MEDIUM | **Unpinned Python dependency version.** Dependency `requests>=2.32.0` is not pinned to an exact version. *Remediation:* Pin Python dependencies with exact versions where feasible. | Dependencies | `skills/lraivisto/vaultresearch/pyproject.toml` |
| MEDIUM | **Sensitive data exposure via external search API.** The watchdog feature allows users to define watch targets of type `query`, which are executed against an external search API (e.g., Brave Search via `core.perform_brave_search`). If sensitive information is included in a user-defined watch query, it will be sent to the external search provider, leading to data exfiltration. *Remediation:* Filter or redact sensitive information in user-provided queries before sending them to external APIs. Clearly warn users about the privacy implications of external search with potentially sensitive data. Consider sandboxing the search functionality or offering a local, private search index. | LLM | `scripts/watchdog.py:155` |
| MEDIUM | **Arbitrary database path via environment variable.** The skill's SQLite database path can be overridden by the `RESEARCHVAULT_DB` environment variable. An attacker who controls environment variables in the skill's execution environment could point the database at an arbitrary file, leading to data exfiltration (if a sensitive file is opened as a database) or data tampering/denial of service (if a critical system file is overwritten). *Remediation:* Restrict `RESEARCHVAULT_DB` to a predefined set of safe directories, or enforce that it resolves to a path within the skill's own data directory. Validate the path to ensure it does not point to sensitive system locations. | LLM | `scripts/db.py:10` |
| LOW | **Unpinned Python dependencies.** The `pyproject.toml` specifies dependencies using minimum version requirements (e.g., `requests>=2.32.0`) rather than exact versions. This can introduce supply chain vulnerabilities if a new dependency release ships a critical security flaw; while less risky than completely unpinned dependencies, it still opens a window of vulnerability. *Remediation:* Pin all dependencies to exact versions (e.g., `requests==2.32.0`) or use a dependency lock file (e.g., `uv.lock` or `pip-tools`) to ensure deterministic builds. Regularly audit and update dependencies. | LLM | `pyproject.toml:10` |
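The dependency findings call for exact pins. A sketch of the change in `pyproject.toml` (version numbers other than `requests==2.32.0`, which the report itself suggests, are placeholders):

```toml
[project]
dependencies = [
    # Before: "requests>=2.32.0" -- any future release is accepted.
    # After: exact pin, so builds are deterministic and a compromised
    # future release cannot be pulled in silently.
    "requests==2.32.0",
]
```

Alternatively, keep loose ranges in `pyproject.toml` and commit a lock file (`uv.lock`, or `requirements.txt` generated by `pip-tools`) so resolved versions are recorded exactly.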
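The bulk environment dump and passthrough findings above can be remediated with an allow-list. A minimal sketch (the `minimal_env` helper and the variable names in `ALLOWED_VARS` are illustrative, not taken from the skill's code):

```python
import os
import subprocess

# Instead of subprocess.run(cmd, env=os.environ.copy()) -- which forwards
# every variable, including any credentials in the environment -- build the
# child environment from an explicit allow-list.
ALLOWED_VARS = ("PATH", "HOME", "LANG")  # hypothetical allow-list


def minimal_env(extra=None):
    """Return an environment containing only explicitly allowed variables."""
    env = {k: os.environ[k] for k in ALLOWED_VARS if k in os.environ}
    if extra:
        env.update(extra)  # caller supplies any additional, vetted values
    return env


# Usage (illustrative): the child process sees only the allow-listed variables.
# subprocess.run(["/usr/bin/mytool"], env=minimal_env(), check=True)
```

Anything not named in the allow-list (API keys, tokens, credential-store paths) simply never reaches the child process.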
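For the command-execution findings, the recommended pattern (static command, absolute path, no shell) can be sketched as follows. The `run_tool` wrapper and the use of `/bin/echo` are illustrative stand-ins, not the skill's actual code:

```python
import subprocess


def run_tool(args):
    """Run a fixed executable with list-form arguments (no shell parsing)."""
    # The executable path is static and absolute; user input only ever
    # appears as discrete argv entries, so it cannot inject shell syntax.
    result = subprocess.run(
        ["/bin/echo", *map(str, args)],
        capture_output=True,
        text=True,
        shell=False,  # the default, stated explicitly for clarity
        check=True,
    )
    return result.stdout.strip()
```

Contrast this with `os.system(f"tool {user_input}")`, where a value like `"; rm -rf ~"` is interpreted by the shell.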
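The arbitrary-read, arbitrary-write, and database-path findings share one remediation: confine user-supplied paths to a designated base directory. A minimal sketch, assuming Python 3.9+ for `Path.is_relative_to` (the `resolve_inside` helper is hypothetical, not part of the skill):

```python
from pathlib import Path


def resolve_inside(base: Path, user_path: str) -> Path:
    """Resolve user_path and ensure it stays inside base.

    Blocks both ``../`` traversal and absolute paths that escape base.
    """
    base = base.resolve()
    # Joining an absolute user_path replaces base entirely, and ".." segments
    # are collapsed by resolve(); the containment check below catches both.
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes allowed directory: {user_path!r}")
    return candidate
```

Both an artifact `add --path` and an export `--output` argument would pass through such a check before any file I/O occurs.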
Full report: https://skillshield.io/report/9063a656a9f38bfd