Security Audit
ailabs-393/ai-labs-claude-skills:packages/skills/finance-manager
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:packages/skills/finance-manager received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 3 high, 0 medium, and 0 low severity. Key findings include Command Injection via LLM Interaction, Path Traversal / Arbitrary File Write in `extract_pdf_data.py`, and Path Traversal / Arbitrary File Write in `generate_report.py`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via LLM Interaction.** The `SKILL.md` documentation describes the skill's intended usage as executing Python scripts via shell commands that take file paths as arguments (e.g., `python scripts/extract_pdf_data.py <input.pdf> <output.csv>`). If the host LLM constructs these shell commands by directly interpolating user-provided filenames or paths without sanitization or escaping, a malicious user could inject arbitrary shell commands: supplying `input.pdf; rm -rf /` as an input filename could lead to arbitrary command execution on the host system. The host LLM must strictly sanitize and escape all user-provided input before constructing shell commands, or, better, use an execution mechanism that passes arguments directly to the Python interpreter (e.g., `subprocess.run(['python', 'script.py', arg1, arg2])`) rather than building a shell string. | LLM | SKILL.md:30 |
| HIGH | **Path Traversal / Arbitrary File Write in `extract_pdf_data.py`.** The script takes an output CSV path (`csv_path`) directly from `sys.argv[2]` and writes the extracted transaction data to it with no validation or sanitization. A malicious user could supply a traversal sequence (e.g., `../../../../etc/passwd`) or an absolute path (e.g., `/tmp/malicious.csv`) to write the data to an arbitrary filesystem location, potentially overwriting sensitive system files, exfiltrating data, or disrupting the system. Sanitize `csv_path`: resolve it with `os.path.abspath()` and `os.path.join()` against an allowed base directory, then verify the resulting path is still inside that directory before writing. | LLM | scripts/extract_pdf_data.py:100 |
| HIGH | **Path Traversal / Arbitrary File Write in `generate_report.py`.** The script takes an output HTML path (`html_path`) directly from `sys.argv[2]` and writes the generated report to it. As in `extract_pdf_data.py`, the path is not validated: a malicious user could supply a traversal sequence (e.g., `../../../../var/www/html/index.html`) to write the report to an arbitrary location, potentially defacing a website or overwriting critical files. Apply the same fix: resolve `html_path` with `os.path.abspath()` and `os.path.join()` against an allowed base directory, then verify the result remains inside that directory before writing. | LLM | scripts/generate_report.py:400 |
| HIGH | **HTML Injection / Cross-Site Scripting (XSS) in Generated Report.** `scripts/generate_report.py` builds the HTML report by embedding data from `analysis_output.json` (derived from user-provided transaction data) directly into an HTML template via f-strings, without HTML-escaping. There are two injection points. (1) JavaScript injection: `category_labels` and `category_values` are inserted directly into JavaScript arrays, so a user-provided category name containing `'` or `</script>` can break out of the string context and execute arbitrary JavaScript in the browser when the report is viewed. (2) HTML injection: `recommendations_html` is built by joining `f'<div class="recommendation">{rec}</div>'`, so a recommendation string containing user-derived HTML (e.g., `<img src=x onerror=alert(1)>`) is rendered directly. An attacker could craft transaction data (malicious category names or descriptions) to inject arbitrary HTML or JavaScript into the generated report. All user-derived data must be escaped for its context: use proper JSON serialization or JavaScript string escaping for script contexts, and an HTML-escaping utility (e.g., `html.escape`; note `cgi.escape` has been removed from the standard library) or a templating engine with auto-escaping for HTML elements, converting `<`, `>`, `&`, `'`, and `"` to their HTML entities. | LLM | scripts/generate_report.py:300 |
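The list-argv remediation recommended for the CRITICAL finding can be sketched as follows. This is a minimal demonstration, not the skill's code: the hostile filename is made up, and `python -c` stands in for the real extraction script. The point is that with list-form `argv` no shell is involved, so metacharacters in an argument reach the child process as one literal string.

```python
import subprocess
import sys

# A hostile "filename" that would run `rm -rf /` if interpolated into a shell string.
malicious = "report.pdf; rm -rf /"

# List-form argv: each element is passed directly to the child process,
# so the payload arrives as a single literal argument, never interpreted.
proc = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", malicious],
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())  # the payload is printed back verbatim, not executed
```

The same shape applies to the skill's scripts: `subprocess.run([sys.executable, "scripts/extract_pdf_data.py", input_pdf, output_csv])` with no `shell=True` and no string concatenation.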
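The base-directory containment check recommended for both path-traversal findings might look like the sketch below. `BASE_DIR` and the function name are assumptions for illustration, not part of the audited code.

```python
import os

BASE_DIR = os.path.abspath("output")  # hypothetical allowed output directory

def safe_output_path(user_path: str) -> str:
    """Resolve user_path under BASE_DIR, rejecting traversal and absolute paths."""
    candidate = os.path.abspath(os.path.join(BASE_DIR, user_path))
    # abspath collapses ".." segments, so "../../etc/passwd" resolves outside
    # BASE_DIR; an absolute user_path replaces BASE_DIR entirely in join().
    # Either way the common prefix is no longer BASE_DIR and we refuse it.
    if os.path.commonpath([BASE_DIR, candidate]) != BASE_DIR:
        raise ValueError(f"path escapes allowed directory: {user_path!r}")
    return candidate
```

A script would then write to `safe_output_path(sys.argv[2])` instead of using `sys.argv[2]` directly.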
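For the XSS finding, context-appropriate escaping with only the standard library might look like this sketch; the category string is a made-up hostile input, and the variable names mirror the finding rather than the actual script.

```python
import html
import json

# Hostile user-derived category name aimed at both injection points.
category = "Dining</script><script>alert(1)</script>"

# HTML element context: convert <, >, &, and quotes to entities before
# interpolating into markup.
recommendation_html = f'<div class="recommendation">{html.escape(category)}</div>'

# JavaScript context: serialize the whole list with json.dumps rather than
# hand-building array literals, then escape "</" so the sequence "</script>"
# cannot terminate the surrounding <script> block.
labels_js = json.dumps([category]).replace("</", "<\\/")
```

`"<\/"` is a legal JSON (and JavaScript) escape for `/`, so the escaped string still parses to the original value while being safe to embed inside a `<script>` element.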
Powered by SkillShield