Trust Assessment
The `pdf` skill received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are Potential Command Injection in Bash Snippets and Excessive File System Permissions / Arbitrary File Access via Python Scripts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection in Bash Snippets.** The `SKILL.md` document contains bash snippets that directly execute command-line utilities such as `pdftotext`, `qpdf`, `pdftk`, and `pdfimages`. If the filenames or other arguments passed to these commands are derived from untrusted user input without proper sanitization (e.g., escaping shell metacharacters), an attacker could inject arbitrary shell commands, leading to remote code execution. Remediation: when executing external commands with user-controlled input, pass arguments as a list to `subprocess.run` to avoid shell interpretation, or escape them with a library function such as `shlex.quote`; avoid building commands by string concatenation. | LLM | SKILL.md:108 |
| HIGH | **Excessive File System Permissions / Arbitrary File Access via Python Scripts.** Multiple Python scripts in the `scripts/` directory (e.g., `convert_pdf_to_images.py`, `create_validation_image.py`, `extract_form_field_info.py`, `fill_fillable_fields.py`, `fill_pdf_form_with_annotations.py`) accept file paths directly from `sys.argv` without validation or sandboxing. If an AI agent invokes these scripts with untrusted user-provided paths, an attacker could read arbitrary files from the file system (data exfiltration) or write to arbitrary locations, potentially overwriting critical system files. Remediation: strictly validate all user-supplied file paths, restrict file operations to a designated, isolated working directory (sandbox), and avoid accepting arbitrary paths, especially for write operations; consider temporary files or a virtual file system for sensitive operations. | LLM | scripts/convert_pdf_to_images.py:26 |
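The command-injection remediation above can be sketched in Python. This is a minimal illustration, not code from the skill itself: it shows the list-argument form of `subprocess.run` (no shell, so metacharacters in the filename are inert) and `shlex.quote` for the rare case where a shell string is genuinely required. The `pdftotext` invocation and the example filename are assumptions for demonstration.

```python
import shlex
import subprocess

def extract_text(pdf_path: str) -> str:
    """Run pdftotext on an untrusted filename without shell interpretation."""
    # Passing arguments as a list bypasses the shell entirely, so a
    # hostile filename like "doc.pdf; rm -rf ~" is treated as one literal
    # argument rather than two commands.
    result = subprocess.run(
        ["pdftotext", pdf_path, "-"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# If a shell string is unavoidable (e.g. for logging), quote each argument:
unsafe_name = "report; rm -rf ~.pdf"
safe_cmd = f"pdftotext {shlex.quote(unsafe_name)} -"
```

With `shlex.quote`, the hostile name is wrapped in single quotes and the embedded `;` loses its meaning to the shell.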
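The path-sandboxing remediation for the file-access finding can be sketched as follows. The working-directory location (`WORK_DIR`) and helper name are hypothetical, not part of the skill; the pattern is to resolve every user-supplied path and reject any that escapes the sandbox root.

```python
from pathlib import Path

# Hypothetical sandbox root; the skill would pick its own working directory.
WORK_DIR = Path("/tmp/pdf-workdir").resolve()

def resolve_in_sandbox(user_path: str) -> Path:
    """Resolve a user-supplied path, refusing anything outside WORK_DIR."""
    candidate = (WORK_DIR / user_path).resolve()
    # resolve() collapses ".." components and symlinks, so a traversal
    # attempt like "../../etc/passwd" lands outside WORK_DIR and is caught.
    if not candidate.is_relative_to(WORK_DIR):
        raise ValueError(f"path escapes working directory: {user_path}")
    return candidate
```

A script would call `resolve_in_sandbox(sys.argv[1])` instead of opening `sys.argv[1]` directly; absolute paths are also rejected, since `WORK_DIR / "/etc/passwd"` resolves outside the sandbox. (`Path.is_relative_to` requires Python 3.9+.)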
[View the full report on SkillShield](https://skillshield.io/report/d935d0e998de7c4c)
Powered by SkillShield