Trust Assessment
vault received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 20 findings: 5 critical, 7 high, 7 medium, and 1 informational. Key findings include unsafe environment variable passthrough, arbitrary command execution, and credential harvesting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating severe issues at that layer.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (20)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/lraivisto/researchvault-brain/tests/test_smoke_cli.py:6 |
| CRITICAL | **Credential harvesting.** Bulk environment variable dump. Skills should only access environment variables they explicitly need; bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remediation: remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/researchvault-brain/tests/test_mcp_server.py:12 |
| CRITICAL | **Credential harvesting.** Bulk environment variable dump. Skills should only access environment variables they explicitly need; bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remediation: remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/researchvault-brain/tests/test_smoke_cli.py:18 |
| CRITICAL | **Credential harvesting.** Bulk environment variable dump. Skills should only access environment variables they explicitly need; bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remediation: remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/lraivisto/researchvault-brain/tests/test_smoke_cli.py:60 |
| CRITICAL | **Arbitrary file read via 'vault_add_artifact' tool.** The 'vault_add_artifact' tool, exposed to the LLM agent via 'scripts/mcp_server.py', accepts an arbitrary 'path' argument, which 'scripts/synthesis.py's '_read_text_file' function then uses to read the specified file. An attacker could instruct the LLM agent to add an artifact with a sensitive file path (e.g., '/etc/passwd', '~/.ssh/id_rsa'), causing the skill to read and potentially exfiltrate its content by storing it in the database and exposing it to the LLM. Remediation: restrict the 'path' argument to a specific, sandboxed directory (e.g., '~/.researchvault/artifacts/') or implement strict allow-listing for file types/locations; do not allow arbitrary file paths to be read. | LLM | scripts/mcp_server.py:144 |
| HIGH | **Unsafe environment variable passthrough.** Bulk environment variable harvesting. Remediation: minimize environment variable exposure; pass only required, non-sensitive variables to MCP servers; use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/researchvault-brain/tests/test_mcp_server.py:12 |
| HIGH | **Unsafe environment variable passthrough.** Bulk environment variable harvesting. Remediation: minimize environment variable exposure; pass only required, non-sensitive variables to MCP servers; use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/researchvault-brain/tests/test_smoke_cli.py:18 |
| HIGH | **Unsafe environment variable passthrough.** Bulk environment variable harvesting. Remediation: minimize environment variable exposure; pass only required, non-sensitive variables to MCP servers; use dedicated secret management instead of environment passthrough. | Manifest | skills/lraivisto/researchvault-brain/tests/test_smoke_cli.py:60 |
| HIGH | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code execution patterns; legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/lraivisto/researchvault-brain/tests/conftest.py:46 |
| HIGH | **Hidden network beacons / undisclosed telemetry.** DNS resolution call that could be used for data exfiltration. Remediation: remove undisclosed network calls and telemetry; all outbound communication should be documented and necessary for the skill's stated purpose. BCC injection in email tools is almost always malicious. | Manifest | skills/lraivisto/researchvault-brain/scripts/scuttle.py:35 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function '_run_cli'; this can execute arbitrary code. Remediation: avoid dangerous functions like exec/eval/os.system; use safer alternatives. | Static | skills/lraivisto/researchvault-brain/tests/test_smoke_cli.py:6 |
| HIGH | **Prompt injection via unsanitized stored content.** The skill stores user-controlled or externally sourced text (e.g., project objectives, finding titles/content, hypothesis statements, event payloads, search queries, artifact metadata) directly in its SQLite database. If this content is later inserted into prompts for the host LLM without sanitization or escaping, a malicious input could manipulate the LLM's behavior. This is particularly relevant for the 'vault_add_finding' and 'vault_add_artifact' tools, and for content ingested by 'scuttle' or 'watchdog'. Remediation: sanitize and encode all user-controlled and externally sourced strings before storage and, crucially, before use in any LLM prompt; consider a templating engine with auto-escaping or explicit escaping functions when constructing prompts from stored data. | LLM | scripts/core.py:100 |
| MEDIUM | **Suspicious import: requests.** This module provides network or low-level system access. Remediation: verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/researchvault-brain/scripts/core.py:7 |
| MEDIUM | **Suspicious import: socket.** This module provides network or low-level system access. Remediation: verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/researchvault-brain/scripts/scuttle.py:5 |
| MEDIUM | **Suspicious import: requests.** This module provides network or low-level system access. Remediation: verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/researchvault-brain/scripts/scuttle.py:7 |
| MEDIUM | **Suspicious import: requests.** This module provides network or low-level system access. Remediation: verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/researchvault-brain/tests/test_grokipedia.py:2 |
| MEDIUM | **Suspicious import: requests.** This module provides network or low-level system access. Remediation: verify the import is necessary; network and system modules in skill code may indicate data exfiltration. | Static | skills/lraivisto/researchvault-brain/tests/test_youtube.py:2 |
| MEDIUM | **Unpinned Python dependency version.** Dependency 'requests>=2.32.0' is not pinned to an exact version. Remediation: pin Python dependencies with exact versions where feasible. | Dependencies | skills/lraivisto/researchvault-brain/pyproject.toml |
| MEDIUM | **Data exfiltration via arbitrary export file path.** The 'vault export' CLI command lets users specify an arbitrary '--output' file path. If the exported content (e.g., project summaries, findings, insights) contains sensitive research data, an attacker could direct the output to a location they control or a publicly accessible directory. Remediation: restrict the '--output' path to a designated export directory within the skill's controlled environment (e.g., '~/.researchvault/exports/'), or require explicit user confirmation for paths outside it. | LLM | scripts/vault.py:36 |
| INFO | **Dependencies pinned to minimum versions, not exact.** The 'pyproject.toml' specifies dependencies with minimum version constraints (e.g., 'requests>=2.32.0') rather than exact versions. While better than unpinned dependencies, this allows minor version updates that could introduce vulnerabilities, breaking changes, or unexpected behavior without explicit review. Remediation: pin all dependencies to exact versions (e.g., 'requests==2.32.0') for deterministic builds, and use a dependency management tool (such as 'pip-compile' or 'poetry lock') to maintain a lock file. | LLM | pyproject.toml:10 |
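The "arbitrary command execution" and "subprocess.run()" findings above stem from shell-style invocation. A minimal sketch of the safer pattern the remediation describes: pass a fixed executable and an argument list (with `shell=False`, the default), so arguments are never re-parsed by a shell and cannot inject extra commands. `run_tool` is a hypothetical helper, not part of the skill's code.

```python
import subprocess
import sys

def run_tool(args: list[str]) -> str:
    """Invoke a fixed interpreter with an argument list.

    Because the command is a list and shell=False (the default), no shell
    ever parses the arguments, so metacharacters in user input ('; rm -rf /')
    are passed through as literal strings rather than executed.
    """
    result = subprocess.run(
        [sys.executable, *args],  # fixed executable; args are data, not shell text
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError on nonzero exit
    )
    return result.stdout
```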
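The bulk environment dump findings recommend passing only required variables to child processes instead of `os.environ.copy()`. A minimal sketch, with a hypothetical allow-list (`RESEARCHVAULT_DB` is an assumed variable name, not one documented by the skill):

```python
import os

# Hypothetical allow-list: only the variables the MCP server actually needs.
# Anything not listed here (API keys, tokens, etc.) never reaches the child.
ALLOWED_VARS = ("PATH", "HOME", "RESEARCHVAULT_DB")

def minimal_env() -> dict[str, str]:
    """Build a minimal environment for subprocess.run(..., env=minimal_env()),
    instead of forwarding the entire parent environment."""
    return {k: os.environ[k] for k in ALLOWED_VARS if k in os.environ}
```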
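The arbitrary-file-read and export-path findings both call for restricting user-supplied paths to a sandboxed directory. One way to sketch that check (the sandbox location and function name are assumptions, not the skill's actual API): resolve the candidate path and reject anything that escapes the sandbox root, which also defeats `../` traversal and absolute-path overrides.

```python
from pathlib import Path

# Hypothetical sandbox root, following the report's suggested location.
SANDBOX = Path.home() / ".researchvault" / "artifacts"

def safe_artifact_path(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside SANDBOX.

    resolve() normalizes '..' segments and symlinks, and joining an absolute
    user path onto SANDBOX simply yields that absolute path, so both traversal
    tricks are caught by the containment check below (requires Python 3.9+).
    """
    candidate = (SANDBOX / user_path).resolve()
    if not candidate.is_relative_to(SANDBOX.resolve()):
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```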
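For the prompt-injection finding, one common mitigation (a sketch, not the skill's implementation, and not a guarantee against injection) is to wrap stored, untrusted content in explicit delimiters before interpolating it into an LLM prompt, stripping any delimiter forgery from the content itself, so the surrounding prompt can instruct the model to treat the delimited region strictly as data:

```python
def quote_untrusted(text: str) -> str:
    """Wrap untrusted stored content in explicit delimiters for prompt use.

    Removes any delimiter strings embedded in the content so the stored text
    cannot fake an early close and smuggle instructions outside the region.
    """
    sanitized = text.replace("<untrusted>", "").replace("</untrusted>", "")
    return f"<untrusted>\n{sanitized}\n</untrusted>"
```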
Embed Code
[](https://skillshield.io/report/c02b0bbbfdf0ad37)