Trust Assessment
notebooklm received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 19 findings: 8 critical, 10 high, 1 medium, and 0 low severity. Key findings include Arbitrary command execution, Unsafe deserialization / dynamic eval, and Dangerous call: subprocess.run().
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (19)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. A hedged remediation sketch follows the table. | Manifest | skills/guccidgi/notebooklm-skill/scripts/__init__.py:53 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/guccidgi/notebooklm-skill/scripts/__init__.py:65 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/guccidgi/notebooklm-skill/scripts/run.py:38 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/guccidgi/notebooklm-skill/scripts/run.py:91 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/guccidgi/notebooklm-skill/scripts/setup_environment.py:54 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/guccidgi/notebooklm-skill/scripts/setup_environment.py:62 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/guccidgi/notebooklm-skill/scripts/setup_environment.py:75 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (os.system, subprocess). Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/guccidgi/notebooklm-skill/scripts/setup_environment.py:132 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function 'ensure_venv_and_run'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/guccidgi/notebooklm-skill/scripts/__init__.py:53 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function 'ensure_venv_and_run'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/guccidgi/notebooklm-skill/scripts/__init__.py:65 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function 'ensure_venv'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/guccidgi/notebooklm-skill/scripts/run.py:38 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function 'main'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/guccidgi/notebooklm-skill/scripts/run.py:91 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function 'run_script'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/guccidgi/notebooklm-skill/scripts/setup_environment.py:132 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function 'ensure_venv'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/guccidgi/notebooklm-skill/scripts/setup_environment.py:54 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function 'ensure_venv'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/guccidgi/notebooklm-skill/scripts/setup_environment.py:62 |
| HIGH | **Dangerous call: subprocess.run().** Call to 'subprocess.run()' detected in function 'ensure_venv'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Static | skills/guccidgi/notebooklm-skill/scripts/setup_environment.py:75 |
| HIGH | **Arbitrary URL storage and navigation.** The `notebook_manager.py` script allows adding arbitrary URLs to the notebook library without validating that they point to `https://notebooklm.google.com/`. While the `SKILL.md` implies only NotebookLM URLs should be used, the code does not enforce this. Subsequently, `scripts/browser_session.py`, which is designed for persistent browser sessions, navigates to these stored `notebook_url` values without performing strict domain validation (unlike `ask_question.py`, which uses a regex check). This allows a malicious actor to store a non-NotebookLM URL in the library, leading the skill's browser to visit arbitrary websites. This could facilitate data exfiltration (e.g., if the site attempts to read browser state or tricks the user into entering credentials in a visible browser) or other malicious activities. Implement strict URL validation in `notebook_manager.py`'s `add_notebook` function to ensure that only URLs matching `https://notebooklm.google.com/` are accepted; this validation should occur before storing the URL in `library.json`. A hedged validation sketch follows the table. | LLM | scripts/notebook_manager.py:102 |
| HIGH | **Arbitrary URL storage and navigation.** The `notebook_manager.py` script allows adding arbitrary URLs to the notebook library without validating that they point to `https://notebooklm.google.com/`. While the `SKILL.md` implies only NotebookLM URLs should be used, the code does not enforce this. Subsequently, `scripts/browser_session.py`, which is designed for persistent browser sessions, navigates to these stored `notebook_url` values without performing strict domain validation (unlike `ask_question.py`, which uses a regex check). This allows a malicious actor to store a non-NotebookLM URL in the library, leading the skill's browser to visit arbitrary websites. This could facilitate data exfiltration (e.g., if the site attempts to read browser state or tricks the user into entering credentials in a visible browser) or other malicious activities. Ensure all modules that navigate based on stored `notebook_url` values (e.g., `browser_session.py`) perform domain validation before navigation. A robust solution would be to validate at the point of storage in `notebook_manager.py` to prevent invalid URLs from entering the library in the first place. | LLM | scripts/browser_session.py:64 |
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/guccidgi/notebooklm-skill/scripts/__init__.py:4 |
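The critical Manifest findings and the high-severity Static findings above all flag the same pattern: `subprocess.run()` calls in the skill's setup and launcher scripts. The recommended mitigations are static commands, absolute paths, and no shell interpretation. The snippet below is a minimal, hedged sketch of that advice only; the virtualenv layout, the allow-list contents, and the function name are illustrative assumptions, not the skill's actual code.

```python
# Hedged sketch of the remediation the findings describe: a fixed argument
# list, absolute paths, and no shell. Paths, allow-list, and function name
# are assumptions for illustration, not taken from the skill's code.
import subprocess
import sys
from pathlib import Path

SKILL_DIR = Path(__file__).resolve().parent
VENV_PYTHON = SKILL_DIR / ".venv" / "bin" / "python"  # assumed venv location

# Only scripts bundled with the skill may be executed.
ALLOWED_SCRIPTS = {"ask_question.py", "notebook_manager.py", "browser_session.py"}

def run_skill_script(script_name: str) -> int:
    """Run a bundled script with a pinned interpreter and no shell interpretation."""
    if script_name not in ALLOWED_SCRIPTS:
        raise ValueError(f"refusing to run unexpected script: {script_name!r}")

    result = subprocess.run(
        [str(VENV_PYTHON), str(SKILL_DIR / script_name)],  # argument list, not a shell string
        check=False,
        capture_output=True,
        text=True,
        timeout=300,
    )
    sys.stderr.write(result.stderr)
    return result.returncode
```

Passing an argument list (rather than `shell=True` with a concatenated string) keeps user-supplied values from ever reaching a shell, which is the core of the "commands must be static" recommendation.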
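For the two LLM-layer findings, the recommended fix is strict origin validation at the point of storage (`notebook_manager.py`'s `add_notebook`) and again before navigation (`browser_session.py`). Below is a minimal sketch of such a check, assuming the `https://notebooklm.google.com/` origin named in the findings; the helper name and call sites are illustrative, not taken from the skill's code.

```python
# Hedged sketch of the URL check the LLM-layer findings ask for. Only the
# allowed origin comes from the report; the helper name and the idea of
# calling it from add_notebook and before navigation are assumptions.
from urllib.parse import urlparse

ALLOWED_SCHEME = "https"
ALLOWED_HOST = "notebooklm.google.com"

def validate_notebook_url(url: str) -> str:
    """Return the URL unchanged if it points at NotebookLM; raise otherwise."""
    parsed = urlparse(url)
    if parsed.scheme != ALLOWED_SCHEME or parsed.hostname != ALLOWED_HOST:
        raise ValueError(f"not a NotebookLM URL: {url!r}")
    return url

# Illustrative use: validate before persisting to library.json in add_notebook,
# and again in browser_session.py before navigating to a stored notebook_url.
```

Checking at both storage and navigation time matches the report's suggestion: rejecting bad URLs in `add_notebook` keeps them out of the library, while revalidating before navigation protects against entries written by other means.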