Trust Assessment
snyk/agent-scan:tests/skills/pdf received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 11 findings: 1 critical, 7 high, 3 medium, and 0 low severity. Key findings include Command Injection via pdf2image in `scripts/convert_pdf_to_images.py`, and Path Traversal in `scripts/check_bounding_boxes.py` and `scripts/check_fillable_fields.py`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on March 1, 2026 (commit 30a672c5). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (11)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via pdf2image in convert_pdf_to_images.py.** The script uses `pdf2image.convert_from_path`, which internally executes external `poppler-utils` command-line tools (e.g., `pdftoppm`). The `pdf_path` argument is taken directly from `sys.argv[1]` without sanitization; a path containing shell metacharacters (e.g., `'; rm -rf /; #.pdf'`) could lead to arbitrary command execution on the host system. **Remediation:** sanitize or escape all shell metacharacters in `pdf_path` before passing it to `convert_from_path`, or use a library that provides a safer API for invoking external processes. | Static | scripts/convert_pdf_to_images.py:10 |
| HIGH | **Path Traversal in scripts/check_bounding_boxes.py.** The script opens a JSON file specified by `sys.argv[1]` without path sanitization. An attacker could supply a path such as `../../../../etc/passwd` to read arbitrary files, enabling data exfiltration. **Remediation:** normalize the path with `os.path.abspath` and confirm (e.g., via `os.path.commonprefix` or a similar check) that it resolves inside a designated input folder. | Static | scripts/check_bounding_boxes.py:40 |
| HIGH | **Path Traversal in scripts/check_fillable_fields.py.** The script initializes `pypdf.PdfReader` with a PDF path taken directly from `sys.argv[1]` without path sanitization. An attacker could supply a path such as `../../../../etc/shadow` to read arbitrary files, enabling data exfiltration. **Remediation:** normalize the path with `os.path.abspath` and confirm that it resolves inside a designated input folder. | Static | scripts/check_fillable_fields.py:7 |
| HIGH | **Path Traversal in scripts/convert_pdf_to_images.py.** The script uses `sys.argv[2]` as the output directory for generated images without path sanitization. If the directory argument contains traversal sequences (e.g., `../../evil_dir`), an attacker could write image files to arbitrary locations, potentially overwriting critical files or planting malicious content. **Remediation:** normalize the output directory with `os.path.abspath` and restrict writes to a designated output folder. | Static | scripts/convert_pdf_to_images.py:19 |
| HIGH | **Path Traversal in scripts/create_validation_image.py.** The script takes `fields_json_path` (`sys.argv[2]`), `input_image_path` (`sys.argv[3]`), and `output_image_path` (`sys.argv[4]`) directly from the command line without path sanitization. Traversal sequences (e.g., `../../`) would let an attacker read arbitrary JSON or image files, or write generated images to arbitrary locations. **Remediation:** sanitize all three path arguments and restrict file access to designated input/output folders. | Static | scripts/create_validation_image.py:39 |
| HIGH | **Path Traversal in scripts/extract_form_field_info.py.** The script takes `pdf_path` (`sys.argv[1]`) and `json_output_path` (`sys.argv[2]`) directly from the command line without path sanitization. Traversal sequences (e.g., `../../`) would let an attacker read arbitrary PDF files or write the extracted JSON to arbitrary locations. **Remediation:** sanitize both path arguments and restrict file access to designated input/output folders. | Static | scripts/extract_form_field_info.py:150 |
| HIGH | **Path Traversal in scripts/fill_fillable_fields.py.** The script takes `input_pdf_path` (`sys.argv[1]`), `fields_json_path` (`sys.argv[2]`), and `output_pdf_path` (`sys.argv[3]`) directly from the command line without path sanitization. Traversal sequences (e.g., `../../`) would let an attacker read arbitrary PDF or JSON files, or write the filled PDF to arbitrary locations. **Remediation:** sanitize all three path arguments and restrict file access to designated input/output folders. | Static | scripts/fill_fillable_fields.py:96 |
| HIGH | **Path Traversal in scripts/fill_pdf_form_with_annotations.py.** The script takes `input_pdf_path` (`sys.argv[1]`), `fields_json_path` (`sys.argv[2]`), and `output_pdf_path` (`sys.argv[3]`) directly from the command line without path sanitization. Traversal sequences (e.g., `../../`) would let an attacker read arbitrary PDF or JSON files, or write the annotated PDF to arbitrary locations. **Remediation:** sanitize all three path arguments and restrict file access to designated input/output folders. | Static | scripts/fill_pdf_form_with_annotations.py:90 |
| MEDIUM | **Resource Exhaustion via JSON parsing.** Multiple scripts (`check_bounding_boxes.py`, `create_validation_image.py`, `fill_fillable_fields.py`, `fill_pdf_form_with_annotations.py`) use `json.load` on input JSON files such as `fields.json`. Attacker-crafted files with deeply nested objects/arrays or extremely long strings could exhaust memory or CPU, causing a denial of service for the agent. **Remediation:** validate JSON size and structure before parsing; consider a streaming parser for very large files, or enforce limits on nesting depth and string length. | Static | scripts/check_bounding_boxes.py:15 |
| MEDIUM | **Resource Exhaustion via large font size in PDF annotations.** `scripts/fill_pdf_form_with_annotations.py` reads `font_size` from `fields.json`, which is derived from untrusted input. An excessively large value (e.g., `1000000`) could exhaust memory or CPU during PDF generation, causing a denial of service. **Remediation:** validate that `font_size` falls within a reasonable range (e.g., 1 to 500 points) before creating annotations. | Static | scripts/fill_pdf_form_with_annotations.py:68 |
| MEDIUM | **Potential Command Injection in SKILL.md examples.** `SKILL.md` shows example invocations of command-line tools (`pdftotext`, `qpdf`, `pdftk`). If the agent builds these commands from unsanitized user input (e.g., replacing `input.pdf` with `'; malicious_command; #.pdf'`), arbitrary commands could run on the host. The skill does not execute these commands itself, but it supplies the patterns the agent will follow, making them high-risk instructions. **Remediation:** escape or sanitize all user-provided arguments when constructing shell commands, or prefer `subprocess.run` with `shell=False` and a list of arguments instead of a single shell string. | Static | SKILL.md:140 |
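The two command-injection findings point to the same fix: never let an untrusted filename reach a shell. A minimal sketch of invoking `pdftoppm` (the poppler tool that `pdf2image` wraps) with an argument list and `shell=False`; the function name and surrounding structure are illustrative, not part of the skill:

```python
import subprocess

def pdf_to_images(pdf_path: str, output_prefix: str) -> None:
    # Passing a list with shell=False hands pdf_path to pdftoppm as a
    # single argv entry: shell metacharacters like ';' or '$(...)' in
    # the filename are never interpreted by a shell.
    subprocess.run(
        ["pdftoppm", "-png", pdf_path, output_prefix],
        shell=False,  # the default; stated explicitly for clarity
        check=True,   # raise CalledProcessError on non-zero exit
    )
```

With this pattern a filename such as `'; rm -rf /; #.pdf'` is simply a (failing) argument to `pdftoppm`, not an executed command.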
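The seven path-traversal findings share one remediation. A minimal sketch, assuming a designated base directory (`resolve_within` and its error message are illustrative, not part of the skill); note that `os.path.commonpath` is stricter than the `os.path.commonprefix` mentioned in the findings, because it compares whole path components:

```python
import os

def resolve_within(base_dir: str, user_path: str) -> str:
    """Resolve user_path relative to base_dir; refuse anything outside it."""
    base = os.path.abspath(base_dir)
    candidate = os.path.abspath(os.path.join(base, user_path))
    # commonpath compares whole components, so "/safe_evil" cannot
    # masquerade as living under "/safe" the way commonprefix allows.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path escapes {base_dir}: {user_path}")
    return candidate
```

Each script would call this once per `sys.argv` path before opening or writing any file; symlinked paths may need an additional `os.path.realpath` pass.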
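For the two resource-exhaustion findings, lightweight pre-parse checks usually suffice. A sketch with illustrative thresholds (the 1 MiB size cap is an assumption; the 1–500 point font range comes from the finding's own suggestion):

```python
import json
import os

MAX_JSON_BYTES = 1 * 1024 * 1024  # assumed cap; reject oversized field files up front

def load_fields(path: str) -> dict:
    # Check the on-disk size before json.load ever allocates for it.
    if os.path.getsize(path) > MAX_JSON_BYTES:
        raise ValueError(f"{path} exceeds {MAX_JSON_BYTES} bytes")
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def safe_font_size(value, lo: float = 1.0, hi: float = 500.0) -> float:
    # Reject attacker-controlled font_size values outside a sane range.
    size = float(value)
    if not (lo <= size <= hi):
        raise ValueError(f"font_size {size} outside [{lo}, {hi}]")
    return size
```

A size cap does not bound nesting depth on its own, but Python's default recursion limit causes `json.load` to raise `RecursionError` on pathologically deep documents rather than hang.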
[View the full report on SkillShield](https://skillshield.io/report/01d803880ce32de3)
Powered by SkillShield