Trust Assessment
chatgpt-exporter-ultimate received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 0 high, 2 medium, and 0 low severity. Key findings include: sensitive environment variable access (`$HOME`); unquoted variables and unsanitized API output leading to command injection and path traversal in `export.sh`; and exposure of the ChatGPT access token via a command-line argument in `export.sh`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
**CRITICAL: Unquoted variables and unsanitized API output lead to command injection and path traversal in `export.sh`** (LLM layer, scripts/export.sh:10)

The `export.sh` script is highly vulnerable to command injection and path traversal due to improper handling of variables:

1. **Command injection:** User-provided arguments (`TOKEN`, `OUTPUT_DIR`) and API-derived data (`TITLE`) are interpolated directly into shell commands without quoting. A malicious `TOKEN` (e.g., `abc; rm -rf /`) or `OUTPUT_DIR` (e.g., `/tmp; rm -rf /`) could execute arbitrary commands. Likewise, if the ChatGPT API returns a `title` containing shell metacharacters (e.g., `$(evil_command)`), they could be executed when `echo "$TITLE"` or `printf ... "$TITLE"` is called.
2. **Path traversal:** The `ID` extracted from the API response is used directly in filenames (`"$OUTPUT_DIR/conversations/${ID}.json"`). If the API returns a malicious `ID` (e.g., `../evil.json`), files could be written outside the intended output directory.

Recommended fixes:

1. **Quote all variables:** Always enclose variable expansions in double quotes (e.g., `"$TOKEN"`, `"$OUTPUT_DIR"`) to prevent word splitting and globbing.
2. **Sanitize API output:** Rigorously sanitize `TITLE` and `ID` before using them in shell commands or file paths. For `TITLE`, use `printf %s` instead of `echo`, or escape special characters. For `ID`, validate against a strict regex (e.g., `^[a-zA-Z0-9_-]+$`) to prevent path traversal.
3. **Use `curl`'s `-H` with care:** While `curl` generally handles quoted strings in `-H`, it is safer to ensure the token itself is clean, or to use a more robust method where possible.
4. **Handle filenames defensively:** Consider `mktemp` for temporary files and `basename`/`dirname` for path manipulation, combined with strict validation of input components.

**MEDIUM: Sensitive environment variable access: `$HOME`** (Static layer, skills/globalcaos/chatgpt-exporter-ultimate/scripts/export.sh:8)

Access to the sensitive environment variable `$HOME` was detected in a shell context. Verify that this access is necessary and that the value is not exfiltrated.

**MEDIUM: ChatGPT access token exposed via command-line argument in `export.sh`** (LLM layer, scripts/export.sh:7)

The script requires the user to supply their sensitive ChatGPT `accessToken` directly as a command-line argument. This makes the token visible in shell history, process lists (`ps aux`), and potentially system logs, increasing the risk of unauthorized access if the user's local environment is compromised. Avoid passing credentials as command-line arguments: prompt for the token interactively with `read -s` (silent input), read it from an environment variable, or retrieve it from a secure credential store.

**INFO: Skill requires high-privilege browser script injection** (LLM layer, SKILL.md:22)

The skill's core functionality relies on the agent's ability to inject script into the user's active ChatGPT browser tab. This grants the injected script (e.g., `bookmarklet.js` or the JS strings from `export-conversations.ts`) full access to the DOM, local storage, and network requests within that tab. While necessary for the skill's intended purpose of exporting conversations, this is a significant privilege: if the agent itself were compromised, or the skill's code maliciously altered, this capability could be exploited for data exfiltration, session hijacking, or other attacks. This is a design-level trust requirement rather than a direct code vulnerability in the provided snippets. The agent platform should enforce robust controls around script injection (strict sandboxing, permission prompts, auditing), users should be fully aware of the permissions they grant, and the skill should adhere to the principle of least privilege within its injected context.
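The remediation steps for the critical finding (quote everything, prefer `printf %s` over `echo`, allow-list API-supplied IDs) can be sketched as a few small helpers. This is a minimal illustration, not the skill's actual code: the function names and the ID pattern are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical hardening helpers for a script like export.sh.
set -euo pipefail

# Allow only conservative filename characters in API-supplied IDs,
# rejecting path-traversal payloads such as "../evil".
validate_id() {
  [[ "$1" =~ ^[a-zA-Z0-9_-]+$ ]]
}

# Print an API-supplied title verbatim; printf %s does not interpret
# backslash escapes or option-like leading dashes the way echo can.
print_title() {
  printf '%s\n' "$1"
}

# Build an output path only after the ID passes validation, with every
# expansion double-quoted to prevent word splitting and globbing.
conversation_path() {
  local output_dir="$1" id="$2"
  validate_id "$id" || return 1
  printf '%s/conversations/%s.json' "$output_dir" "$id"
}
```

With expansions quoted and IDs validated up front, a hostile `title` or `id` in the API response degrades to a failed export rather than command execution or an out-of-tree file write.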
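The token-exposure finding recommends `read -s` or an environment variable instead of a command-line argument. A minimal sketch, assuming a `CHATGPT_TOKEN` variable name and a `get_token` helper that are not part of the skill itself:

```shell
#!/usr/bin/env bash
# Hypothetical token retrieval that keeps the credential out of argv,
# shell history, and `ps` output. CHATGPT_TOKEN is an assumed name.
set -euo pipefail

get_token() {
  if [ -n "${CHATGPT_TOKEN:-}" ]; then
    # Environment variables are visible to the process but not in argv.
    printf '%s' "$CHATGPT_TOKEN"
  else
    # -s suppresses echo, so the token never appears on screen
    # or in shell history.
    local token
    read -r -s -p "ChatGPT access token: " token
    printf '\n' >&2
    printf '%s' "$token"
  fi
}
```

To keep the header out of `curl`'s own argv as well, curl 7.55+ can read headers from stdin: `printf 'Authorization: Bearer %s' "$(get_token)" | curl -fsS -H @- "$API_URL"` (where `$API_URL` is a placeholder for the ChatGPT endpoint).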