Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/finance-manager
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/finance-manager received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 0 critical, 5 high, 0 medium, and 1 low severity. Key findings include Arbitrary File Read via User-Controlled Input Paths, Arbitrary File Write via User-Controlled Output Paths, and Client-Side HTML/JavaScript Injection (XSS) in Generated Report.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 23/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary File Read via User-Controlled Input Paths.** The `extract_pdf_data.py` and `analyze_finances.py` scripts take file paths as command-line arguments. If these arguments are derived from untrusted user input without proper validation or sandboxing, an attacker could specify paths to sensitive system files (e.g., `/etc/passwd`, `/app/secrets.txt`) to read their contents, leading to data exfiltration. Implement strict input validation for file paths, ensuring they are within an allowed, isolated directory and do not contain path traversal sequences (e.g., `../`). Consider using a sandboxed environment or a dedicated file storage service for user-provided files. | LLM | scripts/extract_pdf_data.py:20 |
| HIGH | **Arbitrary File Read via User-Controlled Input Paths.** Same finding as above, reported at a second location: `analyze_finances.py` also accepts file paths as command-line arguments and should apply the same validation and sandboxing. | LLM | scripts/analyze_finances.py:17 |
| HIGH | **Arbitrary File Write via User-Controlled Output Paths.** The `extract_pdf_data.py` and `generate_report.py` scripts write output to file paths specified as command-line arguments. If these arguments are derived from untrusted user input without proper validation, an attacker could specify paths to arbitrary locations on the filesystem. This could lead to overwriting critical system files, injecting malicious scripts into system directories, or filling up disk space to cause a denial of service. Implement strict input validation for output file paths, ensuring they are within an allowed, isolated directory and do not contain path traversal sequences. Prevent writing to sensitive system directories. | LLM | scripts/extract_pdf_data.py:60 |
| HIGH | **Arbitrary File Write via User-Controlled Output Paths.** Same finding as above, reported at a second location: `generate_report.py` also writes to caller-supplied output paths and should apply the same validation. | LLM | scripts/generate_report.py:19 |
| HIGH | **Client-Side HTML/JavaScript Injection (XSS) in Generated Report.** The `generate_report.py` script constructs an HTML report by directly embedding data, such as `category_labels`, into JavaScript code within the `HTML_TEMPLATE`. If untrusted user input (e.g., a malicious category name from a transaction CSV/JSON) contains JavaScript code or HTML tags, it will be interpolated into the generated HTML report without proper escaping. When a user views this report in a browser, the injected code will execute, leading to a Cross-Site Scripting (XSS) vulnerability. All user-controlled data embedded into HTML or JavaScript contexts must be properly escaped. For JavaScript contexts, use a JavaScript-specific escaping function (e.g., `json.dumps` for embedding into JS literals, or `encodeURIComponent` if embedding into URLs). For HTML contexts, use HTML entity encoding. | LLM | scripts/generate_report.py:20 |
| LOW | **Skill Returns Raw Input, Potential Data Exfiltration.** The placeholder `index.js` skill returns the entire `input` object directly. If the `input` to the skill contains sensitive user data, and the LLM's output is logged, stored, or exposed to an unauthorized party, this could lead to unintended data exfiltration. While this is a placeholder, it highlights a pattern that could become a vulnerability if sensitive data is passed to the skill and then echoed back. Ensure that skills only return necessary and non-sensitive information. If sensitive data is processed, it should be redacted or transformed before being returned or logged. | LLM | index.js:6 |
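The path validation recommended for both the read and write findings can be sketched in Python using only the standard library. The `ALLOWED_DIR` base directory and the `validate_path` helper are illustrative assumptions, not code from the skill itself:

```python
from pathlib import Path

# Hypothetical sandbox root; the audited scripts define no such directory.
ALLOWED_DIR = Path("/app/data").resolve()

def validate_path(user_path: str, base: Path = ALLOWED_DIR) -> Path:
    """Resolve a user-supplied path and reject anything outside `base`.

    Resolving first and then checking containment defeats both `../`
    traversal sequences and absolute-path substitution.
    """
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes allowed directory: {user_path!r}")
    return candidate
```

The same check applies to output paths before opening them for writing; pairing it with an allow-list of expected file extensions further narrows the write surface.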
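For the XSS finding, user-controlled values should be escaped per output context before interpolation into the report template. A minimal standard-library sketch (the helper names are illustrative and do not appear in `generate_report.py`):

```python
import html
import json

def js_embed(values) -> str:
    """Serialize data for interpolation inside a <script> block.

    json.dumps escapes quotes and control characters; additionally
    replacing "<" with its \\u escape prevents a </script> breakout.
    """
    return json.dumps(values).replace("<", "\\u003c")

def html_embed(text: str) -> str:
    """Entity-encode user data for HTML element or attribute contexts."""
    return html.escape(text, quote=True)
```

For example, category labels would pass through `js_embed` before substitution into the chart script, and any value placed directly into markup through `html_embed`.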
[Full report](https://skillshield.io/report/84614b52109dab19)