Security Audit
teng-lin/notebooklm-py:src/notebooklm/data
github.com/teng-lin/notebooklm-py

Trust Assessment
teng-lin/notebooklm-py:src/notebooklm/data received a trust score of 30/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 1 medium, and 1 low severity. Key findings include Potential Command Injection via Subagent Task Prompt, Excessive Filesystem Write Permissions via Download Command, Excessive Filesystem Read Permissions via Source Add Command.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 31/100, indicating areas for improvement.
Last analyzed on February 28, 2026 (commit 9eb13cea). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
**CRITICAL — Potential Command Injection via Subagent Task Prompt**
Layer: Static Code Analysis · Location: SKILL.md:254

The skill documentation demonstrates using the `Task` tool with a `prompt` that interpolates user-controlled variables (`{artifact_id}`, `{notebook_id}`, `{source_ids}`) directly into shell commands. If these IDs can be influenced by untrusted input and contain shell metacharacters (e.g., `;`, `&`, `|`, `$(...)`), a malicious user could inject arbitrary commands for the subagent to execute. This is a common vulnerability when shell commands are built via string interpolation without sanitization or escaping.

Recommendation: Sanitize and shell-escape every variable interpolated into shell commands within `Task` prompts. Use a command execution library that handles argument separation and escaping automatically, or explicitly quote and escape all user-controlled inputs (e.g., `shlex.quote()` in Python) before passing them to the shell.

**HIGH — Excessive Filesystem Write Permissions via Download Command**
Layer: Static Code Analysis · Location: SKILL.md:146

The `notebooklm download` command lets the agent write files to arbitrary filesystem paths. If untrusted input can steer the agent toward a sensitive or critical path (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, or system binaries), the result could be data corruption, denial of service, or further compromise. Although the autonomy rules state "Ask before running", a sophisticated prompt injection could trick the agent into confirming a malicious download path.

Recommendation: Restrict `download` output to a predefined, sandboxed directory and do not accept arbitrary paths from user input. If arbitrary paths are unavoidable, validate and sanitize them strictly to prevent directory traversal and writes to sensitive locations, and run the agent with minimal filesystem permissions.

**HIGH — Excessive Filesystem Read Permissions via Source Add Command**
Layer: Static Code Analysis · Location: SKILL.md:132

The `notebooklm source add ./file.pdf` command lets the agent read and upload arbitrary local files. If untrusted input can steer the agent toward a sensitive file (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, or application configuration files), its contents could be exfiltrated by uploading them to NotebookLM.

Recommendation: Restrict `source add` to a predefined, sandboxed input directory and do not accept arbitrary paths from user input. If arbitrary paths are necessary, validate and sanitize them strictly, and grant the agent's environment only the filesystem read permissions it needs.

**MEDIUM — Potential Data Exfiltration via Programmatic Sharing**
Layer: Static Code Analysis · Location: SKILL.md:200

The skill explicitly advertises "programmatic sharing" via `share` commands as a feature beyond the web UI. If the agent can be prompted to share a notebook containing sensitive information with an unauthorized external party, that is a data exfiltration risk; the absence of an explicit "Ask before running" rule for `share` commands exacerbates it.

Recommendation: If programmatic sharing is enabled, configure the agent to "Ask before running" all `share` commands, and implement strict access controls and auditing for sharing operations. Consider whether programmatic sharing is necessary for the agent's intended function at all, and disable it if not.

**LOW — Sensitive Credentials in Environment Variable (NOTEBOOKLM_AUTH_JSON)**
Layer: Static Code Analysis · Location: SKILL.md:60

The `NOTEBOOKLM_AUTH_JSON` environment variable is designed to hold the contents of `storage_state.json`, which contains sensitive authentication tokens. The documentation advises setting it "from a secret", but environment variables carry inherent risk: a misconfigured environment, or an agent that inadvertently logs its variables, would expose the credentials.

Recommendation: Prefer credential-passing mechanisms that avoid direct exposure in environment variables, such as a secure credential store or a temporary file with strict permissions. If environment variables must be used, emphasize the importance of secure secret management practices.
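The command-injection finding recommends `shlex.quote()` for escaping user-controlled values. A minimal sketch of that mitigation, using a hypothetical `build_download_command` helper (the flag names are illustrative, not the tool's actual CLI):

```python
import shlex


def build_download_command(artifact_id: str, notebook_id: str) -> str:
    """Quote user-controlled IDs before interpolating them into a shell command."""
    return (
        "notebooklm download "
        f"--notebook {shlex.quote(notebook_id)} "
        f"--artifact {shlex.quote(artifact_id)}"
    )


# A hostile ID can no longer break out of its argument position:
cmd = build_download_command("abc; rm -rf /", "nb-123")
print(cmd)
# → notebooklm download --notebook nb-123 --artifact 'abc; rm -rf /'
```

`shlex.quote` leaves safe tokens like `nb-123` untouched and single-quotes anything containing shell metacharacters, so `;` is passed to the command as literal data rather than interpreted as a command separator.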
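Both filesystem findings recommend confining file operations to a sandboxed directory. One way to enforce that is to resolve the requested path and reject anything that escapes the sandbox root. A sketch assuming Python 3.9+ (for `Path.is_relative_to`) and an illustrative sandbox location:

```python
from pathlib import Path

# Assumed sandbox root; the real location is a deployment choice.
SANDBOX = Path.home() / "notebooklm-downloads"


def safe_output_path(user_path: str) -> Path:
    """Resolve a user-supplied path and refuse anything outside the sandbox."""
    candidate = (SANDBOX / user_path).resolve()
    if not candidate.is_relative_to(SANDBOX.resolve()):
        raise ValueError(f"refusing to write outside sandbox: {candidate}")
    return candidate


safe_output_path("audio/overview.mp3")   # accepted: stays inside the sandbox
# safe_output_path("../../.ssh/id_rsa")  # rejected: raises ValueError
```

Because `resolve()` normalizes `..` components before the containment check, directory-traversal strings cannot sneak past a naive prefix comparison. The same pattern applies to validating `source add` input paths on the read side.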
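For the credentials finding, one alternative to holding the auth JSON in `NOTEBOOKLM_AUTH_JSON` is writing it to a file readable only by the owner. A sketch under stated assumptions: the secret arrives as a function parameter (in practice it would come from a secret manager), and the code is POSIX-only because of `os.fchmod`:

```python
import json
import os
import tempfile


def write_auth_state(auth_json: str) -> str:
    """Persist auth material to a 0600 temp file instead of the environment.

    Returns the path to the file; the caller is responsible for deleting it
    when the session ends.
    """
    json.loads(auth_json)  # fail fast on malformed input before writing
    fd, path = tempfile.mkstemp(suffix=".json")
    try:
        os.fchmod(fd, 0o600)  # owner read/write only (mkstemp default, made explicit)
        with os.fdopen(fd, "w") as fh:
            fh.write(auth_json)
    except Exception:
        os.unlink(path)  # do not leave a partial secret file behind
        raise
    return path
```

The resulting path can then be handed to whatever consumes `storage_state.json`, keeping the token contents out of the process environment and out of any accidental environment dumps in logs.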
[Full report on SkillShield](https://skillshield.io/report/86ceca355ae0f721)