Trust Assessment
japanese-tutor received a trust score of 69/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. The key findings are that user-provided PDFs are uploaded to the Google Gemini API, that script arguments allow potential arbitrary file reading, and that user-generated content is written directly to the skill's reference files.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating the main areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **User-provided PDFs are uploaded to the Google Gemini API.** The `scripts/parse_pdf_gemini.py` script uploads user-provided PDF files to the Google Gemini API for processing, so the content of any PDF given to the skill is sent to a third-party service (Google). While this is the intended functionality for OCR/parsing, it is a significant data-exfiltration vector and privacy concern: sensitive data inside the PDFs could be exposed to Google. **Recommendation:** Clearly inform users that their PDF documents will be uploaded to Google Gemini, implement a consent mechanism (see the consent-gate sketch after this table), evaluate whether local OCR or other privacy-preserving methods could be used instead, and confirm that Google's data-retention and privacy policies are acceptable for the intended use case. | LLM | scripts/parse_pdf_gemini.py:15 |
| HIGH | **Potential arbitrary file reading via script arguments.** The skill instructs the agent to use `scripts/parse_pdf_gemini.py` and `scripts/parse_docx.py` to parse user-provided files, and both scripts take a file path as a command-line argument (`sys.argv[1]`). If an attacker can control or influence the `pdf_path` or `docx_path` argument through the host LLM, arbitrary files on the executing system could be read; their content could then be processed by the LLM or, in the case of PDFs, uploaded to Google Gemini (as identified in SS-LLM-002). **Recommendation:** Strictly validate and sanitize file paths passed to these scripts, allow access only to files the user explicitly uploaded into a designated, sandboxed directory, and block traversal attacks (e.g., `../`, absolute paths); see the path-validation sketch after this table. | LLM | SKILL.md:29 |
| MEDIUM | **User-generated content is written directly to the skill's reference files.** The skill explicitly states it will "Append new vocabulary to `references/vocab.md`", "Append new grammar to `references/grammar.md`", and "create/update `references/lesson_X.md`". The written content (vocabulary, grammar, lesson material) is derived from user-provided documents and LLM output; if it is not sanitized before being written, an attacker could inject malicious instructions or data into these reference files. This enables a form of self-prompt injection, where the stored content manipulates future interactions with the skill, or corruption of the skill's internal knowledge base. **Recommendation:** Sanitize and validate all content before writing it to the `references/*.md` files so that no executable code, instruction-like markdown, or other malicious payload can be persisted (see the sanitization sketch after this table), and consider a more structured store (e.g., a database) instead of plain text files to simplify validation. | LLM | SKILL.md:34 |
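The consent mechanism recommended for the first finding can be as simple as an explicit prompt before anything leaves the user's machine. The sketch below is a minimal illustration, not the skill's actual code; `upload_to_gemini` is a hypothetical wrapper around whatever upload call `scripts/parse_pdf_gemini.py` really makes.

```python
"""Minimal consent-gate sketch: ask before sending a PDF to a third party."""
import sys


def confirm_upload(pdf_path: str) -> bool:
    """Ask the user to acknowledge that the PDF will leave their machine."""
    print(f"'{pdf_path}' will be uploaded to the Google Gemini API for parsing.")
    answer = input("Proceed? [y/N] ").strip().lower()
    return answer == "y"


def parse_pdf(pdf_path: str) -> None:
    if not confirm_upload(pdf_path):
        sys.exit("Upload cancelled; no data was sent to Google.")
    # upload_to_gemini(pdf_path)  # hypothetical wrapper around the real upload call


if __name__ == "__main__":
    parse_pdf(sys.argv[1])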
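For the second finding, the path checks can be done with the standard library before any file is opened. The sketch below assumes a sandbox directory named `uploads/`; the real skill may store user files elsewhere, so the constant is illustrative only.

```python
"""Minimal path-validation sketch: restrict script arguments to a sandbox."""
from pathlib import Path
import sys

UPLOAD_DIR = Path("uploads").resolve()  # assumed sandbox for user-provided files


def safe_path(raw: str) -> Path:
    """Resolve the argument and reject anything outside UPLOAD_DIR."""
    candidate = (UPLOAD_DIR / raw).resolve()  # absolute paths and ../ resolve here
    if not candidate.is_relative_to(UPLOAD_DIR):  # Python 3.9+
        raise ValueError(f"Refusing path outside upload directory: {raw}")
    if not candidate.is_file():
        raise ValueError(f"Not a regular file: {raw}")
    return candidate


if __name__ == "__main__":
    pdf_path = safe_path(sys.argv[1])  # instead of using sys.argv[1] directly
    print(f"Parsing {pdf_path}")
```

Because the argument is joined onto the sandbox directory and then resolved, both absolute paths and `../` sequences end up outside `UPLOAD_DIR` and are rejected before any read occurs.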
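For the third finding, a minimal sanitization pass might normalize the text and strip constructs that could be re-interpreted as instructions when the reference files are read back. The blocked-pattern policy below is an assumption and should be tightened to match the skill's actual entry format.

```python
"""Minimal sanitization sketch: clean LLM/user-derived text before appending
to references/vocab.md or references/grammar.md."""
import re
import unicodedata
from pathlib import Path

# Assumed policy: drop code fences, HTML tags, markdown headings, and links,
# since they could act as instructions when the file is re-read later.
_BLOCKED = re.compile(r"(`{3}|<[^>]+>|^#{1,6}\s|\[[^\]]*\]\([^)]*\))", re.MULTILINE)


def sanitize_entry(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)
    # Keep printable characters and newlines; drop control characters.
    text = "".join(ch for ch in text if ch.isprintable() or ch == "\n")
    return _BLOCKED.sub("", text).strip()


def append_entry(path: Path, entry: str) -> None:
    cleaned = sanitize_entry(entry)
    if cleaned:
        with path.open("a", encoding="utf-8") as f:
            f.write(f"- {cleaned}\n")  # persist as a plain list item
```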