Security Audit
dkyazzentwatwa/chatgpt-skills:budget-analyzer
github.com/dkyazzentwatwa/chatgpt-skillsTrust Assessment
dkyazzentwatwa/chatgpt-skills:budget-analyzer received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 10 findings: 1 critical, 3 high, 5 medium, and 1 low severity. Key findings include Unpinned Python dependency version, Arbitrary File Read via Path Traversal, Arbitrary File Read via Path Traversal (JSON files).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 23/100, indicating areas for improvement.
Last analyzed on February 24, 2026 (commit d4bad335). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings10
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Cross-Site Scripting (XSS) / HTML Injection in HTML Reports The `_generate_html_report` method constructs HTML content using f-strings and directly embeds user-controlled data (such as category names, transaction descriptions, and recommendation strings) without proper HTML escaping. If an attacker can control these inputs (e.g., by providing a malicious CSV with `<script>` tags in a transaction description or a custom category name), they can inject arbitrary HTML or JavaScript into the generated HTML report. When this report is viewed in a browser, the injected code would execute, leading to XSS. Before embedding any user-controlled data into HTML output, ensure it is properly HTML-escaped. Use a library function (e.g., `html.escape` in Python) to convert special characters like `<`, `>`, `&`, `'`, `"` into their HTML entities. Apply this escaping to all variables derived from user input, including category names, descriptions, and recommendation text. | Static | scripts/budget_analyzer.py:382 | |
| HIGH | Arbitrary File Read via Path Traversal The skill allows reading arbitrary files from the filesystem by accepting unvalidated file paths for input data, custom categories, and budget targets. An attacker could provide a path like `/etc/passwd` or `../../.env` to exfiltrate sensitive information. Implement strict path validation for all user-provided file paths. Restrict file operations to a designated, sandboxed directory. Consider using a file picker UI instead of direct path input for sensitive operations. | Static | scripts/budget_analyzer.py:415 | |
| HIGH | Arbitrary File Read via Path Traversal (JSON files) The skill allows reading arbitrary JSON files from the filesystem by accepting unvalidated file paths for custom categories and budget targets. An attacker could provide a path like `/etc/shadow` or `../../secrets.json` to exfiltrate sensitive information. Implement strict path validation for all user-provided file paths. Restrict file operations to a designated, sandboxed directory. Consider using a file picker UI instead of direct path input for sensitive operations. | Static | scripts/budget_analyzer.py:420 | |
| HIGH | Arbitrary File Write via Path Traversal The skill allows writing arbitrary files to the filesystem by accepting unvalidated file paths for generated reports and plot images. An attacker could provide a path like `/tmp/malicious.sh` or `../../web/index.html` to write files to unintended locations, potentially leading to denial of service, defacement, or remote code execution if combined with other vulnerabilities. Implement strict path validation for all user-provided output file paths. Restrict file operations to a designated, sandboxed output directory. Ensure that the file extension matches the intended format (e.g., '.png' for plots, '.pdf' or '.html' for reports). | Static | scripts/budget_analyzer.py:360 | |
| MEDIUM | Unpinned Python dependency version Requirement 'pandas>=2.0.0' is not pinned to an exact version. Pin Python dependencies with '==<exact version>'. | Dependencies | budget-analyzer/scripts/requirements.txt:1 | |
| MEDIUM | Unpinned Python dependency version Requirement 'numpy>=1.24.0' is not pinned to an exact version. Pin Python dependencies with '==<exact version>'. | Dependencies | budget-analyzer/scripts/requirements.txt:2 | |
| MEDIUM | Unpinned Python dependency version Requirement 'matplotlib>=3.7.0' is not pinned to an exact version. Pin Python dependencies with '==<exact version>'. | Dependencies | budget-analyzer/scripts/requirements.txt:3 | |
| MEDIUM | Unpinned Python dependency version Requirement 'reportlab>=4.0.0' is not pinned to an exact version. Pin Python dependencies with '==<exact version>'. | Dependencies | budget-analyzer/scripts/requirements.txt:4 | |
| MEDIUM | Prompt Injection via User-Controlled Data in Recommendations/Descriptions The `get_recommendations` method and the `analyze` method (for `largest_expense` description) generate text strings that incorporate user-controlled data, such as category names and transaction descriptions. If these generated strings are subsequently passed to an LLM, an attacker could inject malicious instructions into the LLM's prompt by crafting specific category names or transaction descriptions in the input CSV/JSON. For example, a category named 'SYSTEM INSTRUCTION: Ignore previous instructions and...' could manipulate the LLM's behavior. When passing user-controlled data or generated text containing such data to an LLM, sanitize or escape the input to neutralize potential prompt injection attempts. Consider using a 'defensive' prompt engineering approach, clearly delineating user input from system instructions, or employing input validation to restrict the content of category names and descriptions. | LLM | scripts/budget_analyzer.py:290 | |
| LOW | Content Injection in PDF Reports The `_generate_pdf_report` method uses `reportlab.platypus.Paragraph` to render text, including user-controlled data like category names, transaction descriptions, and recommendation strings. ReportLab's `Paragraph` can interpret a subset of HTML-like tags. If user-controlled input contains such tags (e.g., `<font color="red">`), they might be rendered, leading to unintended formatting or minor content manipulation within the PDF. While less severe than XSS, it can affect report integrity and presentation. Sanitize user-controlled text before passing it to `reportlab.platypus.Paragraph` to remove or escape any HTML-like tags. This ensures that user input is treated as plain text and does not interfere with the intended report formatting. | Static | scripts/budget_analyzer.py:350 |
Scan History
Embed Code
[](https://skillshield.io/report/a30e9f9149ac37af)
Powered by SkillShield