Trust Assessment
The `pdf` skill received a trust score of 55/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 10 findings: 0 critical, 6 high, 2 medium, 1 low, and 1 informational. Key findings include Arbitrary File Read via Script Argument, Network egress to untrusted endpoints, and Covert behavior / concealment directives.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, indicating significant behavioral-safety risk.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (10)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary File Read via Script Argument.** The script `check_bounding_boxes.py` reads a JSON file specified by `sys.argv[1]`. If an attacker can control this argument, they can instruct the agent to read arbitrary files from the filesystem (e.g., `/etc/passwd`, sensitive configuration files). The content, or at least its existence and parseability as JSON, could be revealed through stdout messages or error outputs. Implement strict input validation for file paths. Ensure the agent's execution environment has minimal necessary filesystem permissions (e.g., sandboxing, chroot, specific allow-lists for directories). Do not allow arbitrary file paths from untrusted user input. | LLM | scripts/check_bounding_boxes.py:46 |
| HIGH | **Arbitrary File Read and Write via Script Arguments.** The script `convert_pdf_to_images.py` reads a PDF file specified by `sys.argv[1]` and writes output images to a directory specified by `sys.argv[2]`. An attacker controlling these arguments could read arbitrary PDF files and write image data to arbitrary locations on the filesystem, potentially overwriting critical files or causing a denial of service by filling up disk space. Implement strict input validation for file paths and output directories. Ensure the agent's execution environment has minimal necessary filesystem permissions (e.g., sandboxing, chroot, specific allow-lists for directories). Do not allow arbitrary file paths or output directories from untrusted user input. | LLM | scripts/convert_pdf_to_images.py:30 |
| HIGH | **Arbitrary File Read and Write via Script Arguments.** The script `create_validation_image.py` reads a JSON file (`sys.argv[2]`) and an input image (`sys.argv[3]`), and writes an output image to a path specified by `sys.argv[4]`. An attacker controlling these arguments could read arbitrary files (JSON or image) and write image data to arbitrary locations on the filesystem, potentially overwriting critical files or causing a denial of service. Implement strict input validation for all file paths. Ensure the agent's execution environment has minimal necessary filesystem permissions (e.g., sandboxing, chroot, specific allow-lists for directories). Do not allow arbitrary file paths from untrusted user input. | LLM | scripts/create_validation_image.py:40 |
| HIGH | **Arbitrary File Read and Metadata Exfiltration via Script Arguments.** The script `extract_form_field_info.py` reads a PDF file specified by `sys.argv[1]` and writes extracted form field metadata to a JSON file specified by `sys.argv[2]`. An attacker controlling these arguments could read arbitrary PDF files (potentially containing sensitive information) and exfiltrate their structural metadata (field names, types, bounding boxes) to an arbitrary location on the filesystem or a network path (e.g., `/dev/stdout` or a mounted network share). Implement strict input validation for all file paths. Ensure the agent's execution environment has minimal necessary filesystem permissions (e.g., sandboxing, chroot, specific allow-lists for directories). Do not allow arbitrary file paths from untrusted user input. | LLM | scripts/extract_form_field_info.py:154 |
| HIGH | **Arbitrary File Read and Write with Content Injection via Script Arguments.** The script `fill_fillable_fields.py` reads an input PDF (`sys.argv[1]`) and a JSON file (`sys.argv[2]`) containing form data, then writes a modified PDF to `sys.argv[3]`. An attacker controlling these arguments could read arbitrary PDF files, inject arbitrary data (from the JSON) into the PDF, and write the resulting document to an arbitrary location. This could be used for data exfiltration, creating malicious PDFs (e.g., with embedded scripts or large data to cause DoS), or overwriting critical files. Implement strict input validation for all file paths. Ensure the agent's execution environment has minimal necessary filesystem permissions (e.g., sandboxing, chroot, specific allow-lists for directories). Do not allow arbitrary file paths or untrusted content for PDF modification from user input. | LLM | scripts/fill_fillable_fields.py:98 |
| HIGH | **Arbitrary File Read and Write with Content Injection via Script Arguments.** The script `fill_pdf_form_with_annotations.py` reads an input PDF (`sys.argv[1]`) and a JSON file (`sys.argv[2]`) containing annotation data, then writes a modified PDF to `sys.argv[3]`. Similar to `fill_fillable_fields.py`, an attacker controlling these arguments could read arbitrary PDF files, inject arbitrary text annotations (from the JSON) into the PDF, and write the resulting document to an arbitrary location. This poses risks of data exfiltration, creating malicious PDFs, or overwriting critical files. Implement strict input validation for all file paths. Ensure the agent's execution environment has minimal necessary filesystem permissions (e.g., sandboxing, chroot, specific allow-lists for directories). Do not allow arbitrary file paths or untrusted content for PDF annotation from user input. | LLM | scripts/fill_pdf_form_with_annotations.py:90 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| MEDIUM | **Arbitrary File Read for Existence/Type Confirmation.** The script `check_fillable_fields.py` reads a PDF file specified by `sys.argv[1]`. While it only prints a simple string indicating whether the PDF has fillable fields, an attacker controlling this argument could use it to confirm the existence of arbitrary PDF files on the filesystem. This is a weaker form of data exfiltration but still provides information about the system's file structure. Implement strict input validation for file paths. Ensure the agent's execution environment has minimal necessary filesystem permissions (e.g., sandboxing, chroot, specific allow-lists for directories). Do not allow arbitrary file paths from untrusted user input. | LLM | scripts/check_fillable_fields.py:8 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). Remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
| INFO | **Runtime Monkey Patching of Third-Party Library.** The script `fill_fillable_fields.py` includes a `monkeypatch_pydpf_method()` function that modifies the behavior of the `pypdf` library at runtime. While intended to fix a specific bug, monkey patching can introduce instability, make debugging difficult, and potentially lead to unexpected behavior or vulnerabilities if not carefully managed, especially across library updates. Consider contributing the fix upstream to the `pypdf` library. If not possible, document the monkey patch thoroughly and ensure compatibility is re-verified with every `pypdf` update. Evaluate if there's an alternative approach that doesn't require modifying library internals. | LLM | scripts/fill_fillable_fields.py:70 |
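Most of the HIGH findings above share the same remediation: validate user-supplied paths against an allow-list before any read or write. A minimal sketch of that check is below; the workspace directory `/tmp/pdf-workspace` is a hypothetical example, not part of the skill, and a real deployment would pair this with OS-level sandboxing:

```python
from pathlib import Path

# Hypothetical allow-listed workspace; a real deployment would choose a
# dedicated, minimally privileged directory.
ALLOWED_DIR = Path("/tmp/pdf-workspace")

def resolve_safe_path(user_path: str, base: Path = ALLOWED_DIR) -> Path:
    """Resolve a user-supplied path, rejecting anything outside `base`.

    `Path.resolve()` normalizes `..` segments and follows symlinks, and
    joining an absolute `user_path` onto `base` simply yields that absolute
    path, so both `../../etc/passwd` and `/etc/passwd` are caught by the
    containment check below.
    """
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base.resolve()):
        raise ValueError(f"path escapes allow-listed directory: {user_path}")
    return candidate
```

Each script's `sys.argv` handling could route through a helper like this before calling `open()`, which addresses the traversal vector the findings describe without changing the scripts' interfaces.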
[Full SkillShield report](https://skillshield.io/report/2b5f3f04663b7b8c)
Powered by SkillShield