Trust Assessment
docx received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 13 findings: 3 critical, 5 high, 4 medium, and 1 low severity. Key findings include arbitrary command execution, unsafe deserialization / dynamic eval, and a dangerous call to `subprocess.run()`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (13)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution** — Python shell execution (`os.system`, `subprocess`). Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/seanphan/docx/ooxml/scripts/pack.py:103 |
| CRITICAL | **Arbitrary command execution** — Python shell execution (`os.system`, `subprocess`). Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/seanphan/docx/ooxml/scripts/validation/redlining.py:153 |
| CRITICAL | **Arbitrary command execution** — Python shell execution (`os.system`, `subprocess`). Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/seanphan/docx/ooxml/scripts/validation/redlining.py:185 |
| HIGH | **Unsafe deserialization / dynamic eval** — decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/seanphan/docx/ooxml/scripts/pack.py:144 |
| HIGH | **Dangerous call: `subprocess.run()`** — call to `subprocess.run()` detected in function `validate_document`. This can execute arbitrary code. Avoid dangerous functions like `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/seanphan/docx/ooxml/scripts/pack.py:103 |
| HIGH | **Dangerous call: `subprocess.run()`** — call to `subprocess.run()` detected in function `_get_git_word_diff`. This can execute arbitrary code. Avoid dangerous functions like `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/seanphan/docx/ooxml/scripts/validation/redlining.py:153 |
| HIGH | **Dangerous call: `subprocess.run()`** — call to `subprocess.run()` detected in function `_get_git_word_diff`. This can execute arbitrary code. Avoid dangerous functions like `exec`/`eval`/`os.system`; use safer alternatives. | Static | skills/seanphan/docx/ooxml/scripts/validation/redlining.py:185 |
| HIGH | **Agent instructed to execute external commands with unsanitized user input** — the skill explicitly instructs the agent to execute external commands such as `pandoc`, `python` scripts (`unpack.py`, `pack.py`), `soffice`, and `pdftoppm`. These commands take file paths and other arguments that originate from user input; if the agent substitutes user-provided values into them without proper sanitization (quoting, escaping shell metacharacters), a malicious user could inject arbitrary shell commands. The agent must sanitize all user-provided input before constructing and executing shell commands, typically via `shlex.quote()` in Python (or equivalent mechanisms in other languages) or by passing arguments as a list to `subprocess.run`. The skill should explicitly warn the agent about this requirement. | LLM | SKILL.md:30 |
| MEDIUM | **Unsafe deserialization / dynamic eval** — decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/seanphan/docx/scripts/document.py:128 |
| MEDIUM | **Unsafe deserialization / dynamic eval** — decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/seanphan/docx/scripts/utilities.py:314 |
| MEDIUM | **Unsafe deserialization / dynamic eval** — decryption followed by code execution. Remove obfuscated code execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/seanphan/docx/scripts/utilities.py:337 |
| MEDIUM | **Skill instructs agent to use `sudo` for dependency installation** — the skill's "Dependencies" section instructs the agent to run `sudo apt-get install` for `pandoc`, `libreoffice`, and `poppler-utils`. Executing commands with `sudo` grants root privileges; instructing an AI agent to use `sudo` without strict controls can pose a security risk if the agent is compromised or misinterprets instructions, potentially leading to unintended system modifications or privilege escalation. If possible, install dependencies in a sandboxed or containerized environment without requiring root privileges at the agent's runtime. If `sudo` is unavoidable, restrict the agent with narrow `sudoers` rules that allow only the specific commands needed, or make installation a manual, one-time setup step outside the agent's operational scope. | LLM | SKILL.md:140 |
| LOW | **`subprocess.run` with potentially user-controlled file path in `pack.py`** — the `validate_document` function uses `subprocess.run` to call `soffice` with `doc_path` as an argument. `doc_path` originates from `output_file`, an argument to `pack_document`, and ultimately from user input via the skill's instructions. While `subprocess.run` with a list of arguments generally prevents shell injection, if `soffice` itself has vulnerabilities where a specially crafted file path (e.g., containing internal command separators or interpreted as a URL/script) could lead to arbitrary code execution, this could be exploited. This is a less common attack vector for `soffice` but remains a theoretical risk. Validate that `doc_path` is strictly a safe file path (e.g., only alphanumerics, hyphens, underscores, dots, and valid path separators, with no shell metacharacters) before passing it to `subprocess.run`. Relying on `subprocess.run`'s default argument handling is usually sufficient, but explicit validation adds a layer of defense. | LLM | ooxml/scripts/pack.py:109 |
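The remediation the high-severity findings describe — passing arguments as a list to `subprocess.run` and quoting with `shlex.quote()` when a shell string is unavoidable — can be sketched as follows. This is a minimal illustration, not the skill's code; `run_pandoc` and `shell_safe` are hypothetical names.

```python
import shlex
import subprocess

def run_pandoc(input_path: str, output_path: str) -> None:
    """Convert a document with pandoc, passing arguments as a list.

    A list argument avoids shell interpretation entirely, so shell
    metacharacters in a file name cannot inject extra commands.
    """
    subprocess.run(["pandoc", input_path, "-o", output_path], check=True)

def shell_safe(value: str) -> str:
    """Quote a user-supplied value for the rare case where a shell
    string must be built by hand."""
    return shlex.quote(value)
```

With this pattern, a hostile file name such as `doc.docx; rm -rf /` is passed to `pandoc` as a single literal argument (or emerges from `shell_safe` wrapped in single quotes) rather than being interpreted by the shell.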
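The explicit path validation suggested by the LOW finding — allow-listing plain file-path characters before handing a path to `subprocess.run` — might look like the sketch below. The function name, the character allow-list, and the containment check are assumptions for illustration, not the skill's implementation.

```python
import re
from pathlib import Path

# Allow only alphanumerics, dots, underscores, hyphens, and path separators.
_SAFE_PATH = re.compile(r"[A-Za-z0-9._/\\-]+")

def is_safe_doc_path(path: str) -> bool:
    """Return True if `path` contains no shell metacharacters and
    resolves to a location inside the current working directory."""
    if not _SAFE_PATH.fullmatch(path):
        return False
    return Path(path).resolve().is_relative_to(Path.cwd())
```

The regex rejects metacharacters such as `;`, `|`, and spaces, while the `resolve()`/`is_relative_to()` check (Python 3.9+) additionally blocks `..` traversal out of the working directory.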