Trust Assessment
context-restore received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 24 findings: 12 critical, 11 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, unsafe deserialization / dynamic eval, and dangerous calls to subprocess.run().
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, where the most severe issues were concentrated.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (24)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/scripts/restore_context.py:2150 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_full_integration.py:333 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_full_integration.py:368 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_integration.py:26 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_integration.py:42 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_integration.py:54 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_integration.py:197 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_integration.py:212 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_integration.py:242 |
| CRITICAL | Arbitrary command execution: Python shell execution (os.system, subprocess). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/alexunitario-sketch/context-restore/tests/test_integration.py:278 |
| CRITICAL | Arbitrary file read via CLI argument: `restore_context.py` lets users specify an arbitrary file path via the `--file`/`-f` command-line argument, then reads and processes that file. Because the processed content is returned to the host LLM, an attacker can exfiltrate sensitive files (e.g., `/etc/passwd`, `/root/.ssh/id_rsa`, API keys, configuration files) by providing their paths. Strictly validate and sanitize file paths from command-line arguments; restrict file access to a predefined sandbox or a whitelist of allowed directories; resolve paths with `os.path.abspath` and check them against allowed base directories to prevent traversal; or remove arbitrary-path support if it is not strictly necessary. | LLM | scripts/restore_context.py:169 |
| CRITICAL | Path traversal and data exfiltration via malicious context content: `extract_key_projects` in `scripts/restore_context.py` uses the regex `r'项目:\s*([^\n]+)'` (项目 means "project") to extract project names, which `project_progress.get_project_progress` in `scripts/project_progress.py` then joins into a file path with `os.path.join(PROJECTS_BASE_PATH, project_name)`. An attacker can inject a project name such as `../../../etc/passwd` into the context file, causing the skill to read an arbitrary file and include its content in the output. Use a stricter regex that matches only safe characters (e.g., alphanumerics, spaces, hyphens) and disallows traversal sequences like `../`; sanitize extracted names with `os.path.basename` or `pathlib.Path.name` before building paths; or validate them against a whitelist of known project names. | LLM | scripts/restore_context.py:593 |
| HIGH | Dangerous call: subprocess.run() detected in function 'send_context_change_notification'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/scripts/restore_context.py:2150 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_cli_summary_output'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_full_integration.py:333 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_cli_with_filter'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_full_integration.py:368 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_default_args'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_integration.py:26 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_level_argument'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_integration.py:42 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_summary_argument'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_integration.py:54 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_nonexistent_file_cli'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_integration.py:197 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_invalid_level_cli'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_integration.py:212 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_json_output_with_summary'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_integration.py:242 |
| HIGH | Dangerous call: subprocess.run() detected in function 'test_file_output'. This can execute arbitrary code; avoid dangerous functions like exec/eval/os.system and use safer alternatives. | Static | skills/alexunitario-sketch/context-restore/tests/test_integration.py:278 |
| HIGH | Prompt injection via untrusted context content: the skill's primary function is to process 'conversation context' from files and return the result directly to the host LLM. Combined with the data exfiltration vulnerabilities above (arbitrary file read, path traversal), an attacker can control the content fed to the LLM, inserting malicious instructions that manipulate its behavior, extract sensitive information, or trigger unintended actions. Sanitize and filter all content generated from untrusted sources before it is returned to the LLM; consider a separate, sandboxed LLM call for processing untrusted content; and fix the underlying exfiltration vulnerabilities to reduce the attack surface for prompt injection. | LLM | scripts/restore_context.py:185 |
| MEDIUM | Unsafe deserialization / dynamic eval: decryption followed by code execution. Remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/alexunitario-sketch/context-restore/scripts/project_progress.py:6 |
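The path-handling remediation recommended in the two file-read findings (resolve the path, then check it against an allowed base directory) can be sketched as follows. This is a minimal illustration, not the skill's code; the sandbox directory name is a placeholder assumption:

```python
import os

ALLOWED_BASE = "/srv/context-restore/data"  # hypothetical sandbox directory

def resolve_safe_path(user_path: str, base: str = ALLOWED_BASE) -> str:
    """Resolve a user-supplied path and refuse anything outside `base`."""
    base_real = os.path.realpath(base)
    # join + realpath collapses "../" sequences and symlinks; an absolute
    # user_path simply replaces the base and is then rejected below.
    resolved = os.path.realpath(os.path.join(base_real, user_path))
    if os.path.commonpath([resolved, base_real]) != base_real:
        raise ValueError(f"path escapes allowed directory: {user_path!r}")
    return resolved
```

The prefix check must run on the *resolved* path; comparing the raw string would let `../` sequences slip through.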
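For the project-name extraction issue, the recommended whitelist validation could look like the sketch below. The allowed character set is an assumption (note that Python's `\w` also admits non-ASCII word characters, so Chinese project names would pass):

```python
import re

# Assumed policy: word characters, spaces, and hyphens only. "/" and "."
# are excluded, so traversal sequences like "../" cannot match.
_SAFE_NAME = re.compile(r"[\w\- ]+")

def sanitize_project_name(raw: str) -> str:
    """Accept a project name only if it matches the whitelist pattern."""
    name = raw.strip()
    if not _SAFE_NAME.fullmatch(name):
        raise ValueError(f"invalid project name: {raw!r}")
    return name
```

Rejecting invalid names outright is safer than stripping directory components with `os.path.basename`, which would silently accept `../../../etc/passwd` as `passwd`.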
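The subprocess findings all point at the same hardening pattern: pass a static argument list rather than a shell string, so arguments are treated as data and never parsed by a shell. A minimal illustration, using `/bin/echo` as a stand-in command:

```python
import subprocess

def run_static_command(args: list[str]) -> str:
    """Run a fixed argument list with subprocess.run (no shell=True)."""
    result = subprocess.run(
        args,
        capture_output=True,
        text=True,
        check=True,    # raise CalledProcessError on non-zero exit
        timeout=10,    # bound execution time
    )
    return result.stdout

# A hostile-looking argument is inert: there is no shell to interpret ";".
out = run_static_command(["/bin/echo", "hello; rm -rf /"])
```

With `shell=False` (the default), `subprocess.run` invokes the binary directly, so metacharacters like `;` or `$()` in arguments have no special meaning.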