Security Audit
ailabs-393/ai-labs-claude-skills:packages/skills/business-document-generator
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:packages/skills/business-document-generator received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. The key findings are Unpinned Python Dependencies, Arbitrary File Read via User-Controlled Path, and Arbitrary File Write via User-Controlled Paths.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned Python Dependencies.** The `SKILL.md` instructs users to install Python packages (`pypdf`, `reportlab`) without specifying version numbers. A malicious release of either dependency would be installed automatically (a supply-chain risk), breaking changes in newer versions can cause unexpected behavior, and builds are non-deterministic. Remediation: pin dependency versions in a `requirements.txt` file (e.g., `pypdf==3.17.4`, `reportlab==4.0.8`) and instruct users to install with `pip install -r requirements.txt`. | Static | SKILL.md:70 |
| HIGH | **Arbitrary File Read via User-Controlled Path.** The `scripts/generate_document.py` script passes its `data_file` argument directly to `open(data_file, 'r')` and `json.load(f)`. An attacker who controls `data_file` can make the script read any file its execution context has permission to access; even when `json.load` fails on non-JSON content, the file has already been read into memory, so this is a data-exfiltration risk. Remediation: strictly validate and sanitize `data_file`, restrict reads to a designated non-sensitive directory (e.g., a temporary user-specific upload directory), and reject paths containing directory-traversal sequences such as `../`. | Static | scripts/generate_document.py:50 |
| HIGH | **Arbitrary File Write via User-Controlled Paths.** The `scripts/generate_document.py` script builds the output path as `self.output_dir / output_filename`, and both components are derived from command-line arguments. An attacker who controls them can write the generated PDF anywhere the script's execution context has permission, potentially overwriting critical system files, planting content in web-server directories, or exhausting disk space. Remediation: strictly validate and sanitize `output_dir` and `output_filename`, restrict `output_dir` to a designated non-sensitive output directory, reject traversal sequences such as `../`, and consider UUID-based filenames to guarantee uniqueness and prevent overwrites. | Static | scripts/generate_document.py:77 |
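The dependency-pinning remediation from the first finding can be sketched as a `requirements.txt` (the version numbers below are copied from the finding's own recommendation; verify current releases before pinning):

```text
# requirements.txt — exact pins make installs deterministic and auditable
pypdf==3.17.4
reportlab==4.0.8
```

Users would then install with `pip install -r requirements.txt` instead of installing unpinned packages by name.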
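For the arbitrary-read finding, path confinement can be sketched as follows. This is a minimal illustration, not the skill's actual code: the sandbox directory `ALLOWED_INPUT_DIR` and the helper name `load_data_file` are assumptions introduced here.

```python
import json
from pathlib import Path

# Hypothetical sandbox directory for uploaded data files; the skill does
# not define one, so this location is an assumption for illustration.
ALLOWED_INPUT_DIR = Path("/tmp/docgen/uploads")

def load_data_file(data_file: str) -> dict:
    """Load JSON only from inside ALLOWED_INPUT_DIR, rejecting traversal."""
    path = Path(data_file).resolve()  # collapses "../" and symlinks
    try:
        # relative_to() raises ValueError if the resolved path escapes the sandbox
        path.relative_to(ALLOWED_INPUT_DIR.resolve())
    except ValueError:
        raise ValueError(f"data_file must live under {ALLOWED_INPUT_DIR}")
    with open(path, "r") as f:
        return json.load(f)
```

Resolving the path *before* the containment check is the important step: a naive prefix check on the raw string would still accept `/tmp/docgen/uploads/../../etc/passwd`.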
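The arbitrary-write remediation (confined output directory plus UUID filenames) can be sketched the same way. Again, `ALLOWED_OUTPUT_DIR` and `safe_output_path` are hypothetical names introduced for this example:

```python
import uuid
from pathlib import Path

# Hypothetical confined output root; an assumption for illustration.
ALLOWED_OUTPUT_DIR = Path("/tmp/docgen/output")

def safe_output_path(output_dir: str, stem: str = "document") -> Path:
    """Return a unique PDF path inside ALLOWED_OUTPUT_DIR, rejecting escapes."""
    out_dir = Path(output_dir).resolve()
    try:
        out_dir.relative_to(ALLOWED_OUTPUT_DIR.resolve())
    except ValueError:
        raise ValueError(f"output_dir must live under {ALLOWED_OUTPUT_DIR}")
    out_dir.mkdir(parents=True, exist_ok=True)
    # A random UUID replaces any caller-supplied filename, so the caller can
    # neither overwrite existing files nor choose a dangerous name.
    return out_dir / f"{stem}-{uuid.uuid4().hex}.pdf"
```

Ignoring the user-supplied filename entirely, rather than trying to sanitize it, sidesteps both the overwrite and the traversal problems at once.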
Full report: https://skillshield.io/report/dae17f4028319353
Powered by SkillShield