Trust Assessment
report-generator received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include: arbitrary file read via a user-controlled path passed to `pandas.read_csv`; arbitrary file write via a user-controlled output path; and unsanitized user input in HTML generation, leading to XSS/prompt injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary file write via user-controlled output path.** The `create_sales_report` function writes generated charts (`_charts.png`) and the HTML report to an `output_path` parameter. If `output_path` is user-controlled and not properly sanitized, an attacker could use path traversal (e.g., `../../../../tmp/malicious.sh` or `/etc/cron.d/malicious_job`) to write files to arbitrary filesystem locations. This could overwrite critical system files, plant malicious scripts for later execution, or exfiltrate data by writing it to publicly accessible locations. The skill's manifest explicitly requests `file_operations` and `code_execution` tools, making this a severe and credible threat. *Remediation:* Implement strict input validation for `output_path`; restrict file writes to a designated, sandboxed directory; sanitize paths to prevent traversal (e.g., using `os.path.basename` or `pathlib.Path.resolve()` with checks); and ensure the LLM only provides paths within an allowed output directory. | LLM | SKILL.md:115 |
| HIGH | **Arbitrary file read via user-controlled path in `pandas.read_csv`.** The `generate_report` and `create_sales_report` functions call `pandas.read_csv` with a file path parameter (`data` or `csv_path`). If these parameters are derived from untrusted user input without sanitization or validation, an attacker could supply a path to an arbitrary file on the system (e.g., `/etc/passwd`, `../../sensitive_data.txt`), allowing data exfiltration when the file's content is read and processed or returned by the skill. The skill's manifest explicitly requests `file_operations` and `code_execution` tools, making this a credible threat. *Remediation:* Implement strict input validation for all file paths; restrict file operations to a designated, sandboxed directory; sanitize paths to prevent traversal (e.g., using `os.path.basename` or `pathlib.Path.resolve()` with checks); and consider using a file picker or content ID instead of raw paths, or ensure the LLM only provides paths to files it has explicitly been granted access to. | LLM | SKILL.md:37 |
| HIGH | **Unsanitized user input in HTML generation, leading to XSS/prompt injection.** The `generate_html_report` function builds an HTML string with an f-string, directly embedding the `title` parameter (e.g., `<title>{title}</title>` and `<h1>{title}</h1>`). If `title` comes from untrusted user input, an attacker could inject malicious HTML or JavaScript (e.g., `<script>alert('XSS')</script>`) into the generated report, enabling Cross-Site Scripting (XSS) when the report is viewed in a browser. If the generated HTML is ever fed back into an LLM, the injected content could also act as a prompt injection, manipulating the LLM's behavior. *Remediation:* Sanitize all user-controlled input before embedding it into HTML; use a templating engine that auto-escapes output (e.g., Jinja2 with autoescape enabled) or escape special characters manually (e.g., `html.escape()` from Python's `html` module) before interpolation. | LLM | SKILL.md:64 |
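The path-restriction remediation suggested for the file read and write findings can be sketched as follows. This is a minimal illustration, not code from the skill: the helper name `safe_resolve` and the base directory `/tmp/reports` are assumptions.

```python
from pathlib import Path

# Illustrative sandbox directory; a real skill would configure this explicitly.
ALLOWED_DIR = Path("/tmp/reports").resolve()

def safe_resolve(user_path: str, base: Path = ALLOWED_DIR) -> Path:
    """Resolve user_path against base and reject anything that escapes it."""
    candidate = (base / user_path).resolve()
    # is_relative_to (Python 3.9+) confirms candidate stays inside base,
    # catching traversal payloads like "../../etc/passwd".
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes allowed directory: {user_path}")
    return candidate
```

With this check, `safe_resolve("q1_report.html")` yields a path inside the sandbox, while `safe_resolve("../../etc/passwd")` raises before any read or write occurs.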
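Likewise, the escaping fix for the HTML-injection finding amounts to passing user-controlled text through `html.escape` before interpolation. A minimal sketch (the function name `render_title_block` is hypothetical, standing in for the vulnerable f-string in `generate_html_report`):

```python
import html

def render_title_block(title: str) -> str:
    # html.escape converts <, >, &, and quotes to entities, so injected
    # markup is rendered as inert text rather than executed by the browser.
    safe = html.escape(title)
    return f"<title>{safe}</title>\n<h1>{safe}</h1>"
```

A payload such as `<script>alert('XSS')</script>` then appears in the report as literal text (`&lt;script&gt;...`) instead of a live script tag.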
Scan History
Embed Code
[SkillShield report badge](https://skillshield.io/report/ada852e91dd2dee0)
Powered by SkillShield