# Trust Assessment
`pptx` received a trust score of 10/100, placing it in the Untrusted category. The skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 26 findings: 16 critical, 8 high, 1 medium, and 1 low severity. Key findings include arbitrary command execution, dangerous calls to `subprocess.run()`, and network egress to untrusted endpoints.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100.
Last analyzed on February 12, 2026 (commit `458b1186`). SkillShield performs automated four-layer security analysis on AI skills and MCP servers.
## Security Findings (26)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `cli-tool/components/skills/scientific/document-skills/pptx/ooxml/scripts/pack.py:103` |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `cli-tool/components/skills/scientific/document-skills/pptx/ooxml/scripts/validation/redlining.py:153` |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `cli-tool/components/skills/scientific/document-skills/pptx/ooxml/scripts/validation/redlining.py:185` |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `cli-tool/components/skills/scientific/document-skills/pptx/scripts/thumbnail.py:219` |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `cli-tool/components/skills/scientific/document-skills/pptx/scripts/thumbnail.py:237` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python` and `grep`. If user-controlled input (e.g., 'your diagram description', 'path-to-file.pptx', search patterns) is directly interpolated into these commands without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. For `grep`, consider using a programmatic XML parser instead of shell commands. | Static | `SKILL.md:20` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python`. If user-controlled input (e.g., 'path-to-file.pptx') is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. | Static | `SKILL.md:46` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python`. If user-controlled input (e.g., `<office_file>`, `<output_dir>`) is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. | Static | `SKILL.md:56` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `grep`. If user-controlled input (e.g., search patterns, file paths) is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. Prefer programmatic parsing over shell utilities like `grep` for structured data. | Static | `SKILL.md:77` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python`. If user-controlled input (e.g., `<dir>`, `<file>`) is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. | Static | `SKILL.md:260` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python`. If user-controlled input (e.g., `<input_directory>`, `<office_file>`) is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. | Static | `SKILL.md:264` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python`. If user-controlled input (e.g., 'template.pptx', 'template-content.md') is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. | Static | `SKILL.md:275` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python`. If user-controlled input (e.g., 'template.pptx') is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. | Static | `SKILL.md:277` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python`. If user-controlled input (e.g., 'template.pptx', 'working.pptx', '0,34,34,50,52') is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. | Static | `SKILL.md:339` |
| CRITICAL | **Prompt injection leading to command injection.** The skill's instructions explicitly direct the LLM to execute shell commands using `python`. If user-controlled input (e.g., 'working.pptx', 'text-inventory.json') is directly interpolated into this command without proper sanitization or escaping, a malicious user could inject arbitrary shell commands. *Remediation:* Instruct the LLM to refuse direct execution of shell commands, or to strictly sanitize and escape all user-provided input before passing it to any shell command. Implement a robust sandboxing mechanism for command execution. | Static | `SKILL.md:347` |
| CRITICAL | **Arbitrary file write (Zip Slip).** `unpack.py` calls `zipfile.ZipFile(input_file).extractall(output_path)` where `input_file` comes directly from command-line arguments (user-controlled). A malicious `.pptx` (which is a zip archive) could contain entries with paths like `../../../../etc/passwd`, causing files to be written outside the intended `output_path` directory. *Remediation:* Validate each entry's path before extraction and ensure no extracted file traverses outside the designated `output_path`; a common check is `os.path.abspath(dest_path).startswith(os.path.abspath(output_path))` for each extracted file. | Static | `ooxml/scripts/unpack.py:15` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `validate_document`; this can execute arbitrary code. *Remediation:* Avoid dangerous functions like `exec`, `eval`, and `os.system`; use safer alternatives. | Static | `cli-tool/components/skills/scientific/document-skills/pptx/ooxml/scripts/pack.py:103` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `_get_git_word_diff`; this can execute arbitrary code. *Remediation:* Avoid dangerous functions like `exec`, `eval`, and `os.system`; use safer alternatives. | Static | `cli-tool/components/skills/scientific/document-skills/pptx/ooxml/scripts/validation/redlining.py:153` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `_get_git_word_diff`; this can execute arbitrary code. *Remediation:* Avoid dangerous functions like `exec`, `eval`, and `os.system`; use safer alternatives. | Static | `cli-tool/components/skills/scientific/document-skills/pptx/ooxml/scripts/validation/redlining.py:185` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `convert_to_images`; this can execute arbitrary code. *Remediation:* Avoid dangerous functions like `exec`, `eval`, and `os.system`; use safer alternatives. | Static | `cli-tool/components/skills/scientific/document-skills/pptx/scripts/thumbnail.py:219` |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `convert_to_images`; this can execute arbitrary code. *Remediation:* Avoid dangerous functions like `exec`, `eval`, and `os.system`; use safer alternatives. | Static | `cli-tool/components/skills/scientific/document-skills/pptx/scripts/thumbnail.py:237` |
| HIGH | **Data exfiltration via `file://` URI scheme in HTML input.** `html2pptx.js` processes user-provided HTML, extracts image sources (`el.src`), and, if a source starts with `file://`, embeds the referenced local file into the generated presentation. A malicious user could craft HTML with `<img src="file:///path/to/sensitive/file">` to exfiltrate arbitrary local files from the agent's filesystem. *Remediation:* Sanitize or disallow `file://` URIs in image sources; allow only `http(s)://` URLs or relative paths within a controlled directory. If local files must be embedded, enforce a strict whitelist of allowed directories or files, with paths canonicalized and validated against traversal. | Static | `scripts/html2pptx.js:109` |
| HIGH | **Excessive permissions for external command execution.** The skill's design requires the agent to execute numerous external commands (`python` scripts, `soffice`, `pdftoppm`, `convert`, `git`), granting broad permission to run arbitrary code on the host. While `subprocess.run` with a list of arguments mitigates direct shell injection within the Python scripts, the overall attack surface is significantly increased, especially when combined with user-controlled inputs. *Remediation:* Restrict external command execution to a minimal, whitelisted set of binaries and arguments; limit filesystem access to necessary directories; consider containerization or virtual environments to isolate the agent. | Static | `SKILL.md:20` |
| HIGH | **LLM analysis found no issues despite critical deterministic findings.** Deterministic layers flagged 16 CRITICAL findings, but LLM semantic analysis returned clean. This may indicate prompt injection or analysis evasion. | LLM | (sanity check) |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | `cli-tool/components/mcps/devtools/figma-dev-mode.json:4` |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). *Remediation:* Remove hidden instructions, zero-width characters, and bidirectional overrides; skill instructions should be fully visible and transparent to users. | Manifest | `cli-tool/components/mcps/devtools/jfrog.json:4` |
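Many of the critical findings concern shell commands assembled around user-supplied file paths. A minimal sketch of the mitigation the remediation text describes, using `subprocess.run` with an argument list so that shell metacharacters in a path are never interpreted (the `run_tool` helper is illustrative, not part of the skill):

```python
import subprocess

def run_tool(tool: str, user_path: str) -> str:
    """Run `tool` on a user-supplied path without invoking a shell.

    In argument-list form, metacharacters in user_path (';', '&&',
    '$(...)') reach the child process as literal argv bytes, not as
    shell syntax, so they cannot inject extra commands.
    """
    result = subprocess.run(
        [tool, user_path],  # never shell=True with user input
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# A hostile filename passes through verbatim instead of executing:
print(run_tool("echo", "deck.pptx; rm -rf /"))  # prints: deck.pptx; rm -rf /
```

The same pattern applies to the `soffice`, `pdftoppm`, and `git` invocations flagged above: validate the path, then pass it as a single argv element.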
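The Zip Slip finding in `unpack.py` can be closed with the path check its remediation describes: resolve each archive entry's destination and confirm it stays under the output directory before extracting anything. A minimal sketch (`safe_extract` is an illustrative name, not the skill's API):

```python
import os
import zipfile

def safe_extract(zip_path: str, output_path: str) -> None:
    """Extract an archive, rejecting entries that escape output_path."""
    out_root = os.path.abspath(output_path)
    with zipfile.ZipFile(zip_path) as zf:
        for entry in zf.namelist():
            dest = os.path.abspath(os.path.join(out_root, entry))
            # Block '../' traversal and absolute entry paths (Zip Slip).
            if not dest.startswith(out_root + os.sep):
                raise ValueError(f"unsafe path in archive: {entry!r}")
        zf.extractall(out_root)
```

Validating every name before calling `extractall` means a single hostile entry aborts the whole extraction rather than leaving a partially written tree.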
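The `html2pptx.js` finding asks for an allowlist on image sources. Sketched here in Python for consistency with the other examples (an equivalent JavaScript check would parse `src` with `new URL()`); `is_safe_image_src` is an illustrative name:

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_image_src(src: str) -> bool:
    """Accept http(s) URLs and contained relative paths; reject file:// etc."""
    scheme = urlparse(src).scheme.lower()
    if scheme:
        return scheme in ALLOWED_SCHEMES  # blocks file://, data:, ...
    # Scheme-less: treat as a relative path, but refuse absolute paths
    # and parent-directory traversal out of the working directory.
    return not src.startswith(("/", "\\")) and ".." not in src
```

Under this check, `file:///etc/passwd` and `/etc/passwd` are rejected while `https://example.com/a.png` and `img/logo.png` pass, which matches the remediation's allowlist intent.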
[](https://skillshield.io/report/921510c0f198e9c5)
Powered by SkillShield