Trust Assessment
invoice-template received a trust score of 65/100, placing it in the Caution category: the skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 3 findings (1 critical, 1 high, 1 medium, 0 low): Arbitrary File Write via Untrusted Output Path, Unpinned Dependencies in Installation Instructions, and Potential for Prompt Injection via Data Input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary File Write via Untrusted Output Path.** The skill's manifest declares `file_operations`, and the example code writes PDF files to a user-specified `output_path`. If `output_path` is derived from untrusted user input, an attacker could target arbitrary file paths: overwriting system files, writing to sensitive directories, or exfiltrating data by writing it to publicly accessible locations. This grants excessive write permissions based on user input. Remediation: strictly validate and sanitize `output_path`, restrict writes to a designated sandboxed directory, never accept arbitrary paths from user input, and consider using a UUID or similar for filenames within a secure directory to prevent path traversal and arbitrary file creation. | LLM | SKILL.md:60 |
| HIGH | **Unpinned Dependencies in Installation Instructions.** The Installation section suggests installing Python packages (`python-docx`, `openpyxl`, `python-pptx`, `reportlab`, `jinja2`) without specifying exact versions. Future installations might pull newer releases containing security flaws, breaking changes, or malicious code if a maintainer's account is compromised, leaving the skill susceptible to dependency confusion or known vulnerabilities in later versions. Remediation: pin all dependencies to specific, known-good versions (e.g., `package==1.2.3`) in a `requirements.txt` file, and regularly audit and update them after verifying security and compatibility. | LLM | SKILL.md:149 |
| MEDIUM | **Potential for Prompt Injection via Data Input.** The skill processes user-provided data (e.g., `invoice_data`) to generate PDFs. While the examples show structured data, the LLM may be prompted to generate or modify this data from untrusted input; without sanitization, an attacker could inject instructions into fields like `description` or `notes` to manipulate the host LLM's subsequent actions or extract information. Remediation: validate and sanitize all user-provided fields before building the `invoice_data` dictionary, and explicitly instruct the LLM to treat user input as data, not instructions, filtering out potential prompt-injection attempts. | LLM | SKILL.md:25 |
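The remediation for the critical finding (validate the output path, confine writes to a sandbox, use UUID filenames) can be sketched in Python. This is a minimal illustration, not code from the skill itself: `SAFE_OUTPUT_DIR` and `safe_output_path` are hypothetical names, and the sandbox location is an assumption.

```python
import os
import tempfile
import uuid

# Hypothetical sandbox directory; a real deployment would choose a
# dedicated, access-controlled location.
SAFE_OUTPUT_DIR = os.path.join(tempfile.gettempdir(), "invoice_output")


def safe_output_path(filename: str) -> str:
    """Return a write path confined to SAFE_OUTPUT_DIR.

    Directory components in the user-supplied name are discarded, only a
    vetted extension is kept, and the basename is replaced with a UUID so
    user input never controls where the file lands.
    """
    ext = os.path.splitext(os.path.basename(filename))[1]
    if ext.lower() != ".pdf":
        raise ValueError("only .pdf output is allowed")
    os.makedirs(SAFE_OUTPUT_DIR, exist_ok=True)
    path = os.path.join(SAFE_OUTPUT_DIR, f"{uuid.uuid4().hex}{ext}")
    # Defense in depth: verify the resolved path stays inside the sandbox.
    resolved = os.path.realpath(path)
    if not resolved.startswith(os.path.realpath(SAFE_OUTPUT_DIR) + os.sep):
        raise ValueError("path escapes sandbox")
    return resolved
```

Even a traversal attempt such as `safe_output_path("../../etc/passwd.pdf")` resolves to a fresh UUID-named file inside the sandbox directory. The HIGH finding is addressed separately by pinning versions (e.g., `package==1.2.3`) in `requirements.txt`, as the report recommends.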
[View the full report on SkillShield](https://skillshield.io/report/0dad28340f36906a)
Powered by SkillShield