Security Audit
dkyazzentwatwa/chatgpt-skills:certificate-generator
github.com/dkyazzentwatwa/chatgpt-skills

Trust Assessment
dkyazzentwatwa/chatgpt-skills:certificate-generator received a trust score of 51/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 1 high, 2 medium, and 1 low severity. Key findings include arbitrary file write via CSV data in the `batch_generate()` filename pattern, arbitrary file write via the `save()` method, and unpinned Python dependency versions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 53/100, indicating areas for improvement.
Last analyzed on February 24, 2026 (commit d4bad335). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary file write via CSV data in `batch_generate()` filename pattern.** The `CertificateGenerator.batch_generate()` method constructs output filenames using `filename_pattern.format(**row_data)`, where `row_data` is populated directly from an untrusted CSV file. If a malicious CSV contains fields such as `name`, `course`, or `certificate_id` with path traversal sequences (e.g., `../../`), those sequences are interpolated into the `filename_pattern`, allowing generated PDFs to be written to arbitrary filesystem locations and potentially leading to system compromise, data destruction, or privilege escalation. *Remediation:* before formatting the `filename_pattern`, sanitize all values in `row_data` that will be used in the pattern. Remove or escape path separators (`/`, `\`) and traversal sequences (`..`, `../`); a robust approach is to allow only alphanumeric characters, hyphens, and underscores in such fields, or to apply `Path(filename).name` after formatting to strip any directory components. | LLM | scripts/certificate_gen.py:445 |
| HIGH | **Arbitrary file write via `save()` method.** The `CertificateGenerator.save()` method writes the generated PDF directly to the provided `filename` argument. If that filename is controlled by untrusted input (e.g., via CLI arguments), an attacker can use path traversal sequences (e.g., `../../`) to write files to arbitrary locations, overwriting critical system files or placing malicious scripts. *Remediation:* sanitize the `filename` argument to prevent path traversal. Ensure it contains only a base filename and extension, or resolve it against a secure base directory, e.g. `Path(secure_output_dir) / Path(filename).name`. | LLM | scripts/certificate_gen.py:400 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `reportlab>=4.0.0` is not pinned to an exact version. *Remediation:* pin Python dependencies with `==<exact version>`. | Dependencies | certificate-generator/scripts/requirements.txt:1 |
| MEDIUM | **Unpinned Python dependency version.** Requirement `Pillow>=10.0.0` is not pinned to an exact version. *Remediation:* pin Python dependencies with `==<exact version>`. | Dependencies | certificate-generator/scripts/requirements.txt:2 |
| LOW | **Arbitrary file read attempt in image loading functions.** The `set_logo()` and `add_signature()` methods accept file paths for images (`_logo_path`, `signature_image`). If an attacker controls these paths (e.g., via CLI arguments), they could make the application load arbitrary files (e.g., `/etc/passwd`) as images. While `reportlab.platypus.Image` is designed for image formats and is unlikely to directly exfiltrate non-image content, error messages could reveal file existence or partial content, and a very large or malformed non-image file could cause a denial of service. *Remediation:* validate image paths to ensure they are within expected, safe directories or conform to strict naming conventions; perform robust image validation before passing paths to `reportlab.Image`; and if possible, restrict image loading to a specific, sandboxed directory. | LLM | scripts/certificate_gen.py:370 |
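For the two dependency findings, a pinned `requirements.txt` would look like the fragment below (the exact version numbers are illustrative; pin to whatever versions the skill was tested against):

```
reportlab==4.0.9
Pillow==10.2.0
```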
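The remediation suggested for the two file-write findings can be sketched as follows. This is a minimal illustration, not the skill's actual code; `sanitize_field` and `safe_output_path` are hypothetical helper names, and it combines the allowlist approach with the `Path(...).name` stripping mentioned in the findings:

```python
import re
from pathlib import Path

def sanitize_field(value: str) -> str:
    """Allow only alphanumerics, hyphens, and underscores in filename fields."""
    return re.sub(r"[^A-Za-z0-9_-]", "_", value)

def safe_output_path(output_dir: str, filename_pattern: str, row_data: dict) -> Path:
    # Sanitize every CSV-derived value before interpolating it into the pattern.
    clean = {k: sanitize_field(str(v)) for k, v in row_data.items()}
    # Strip any remaining directory components as defense in depth.
    name = Path(filename_pattern.format(**clean)).name
    base = Path(output_dir).resolve()
    out = (base / name).resolve()
    # Final containment check: the result must stay inside the output directory.
    if base not in out.parents:
        raise ValueError("output path escapes the output directory")
    return out
```

With this in place, a malicious CSV field like `../../etc/passwd` is flattened to underscores and the generated PDF stays inside `output_dir`.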
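The low-severity image finding can be mitigated with a similar containment check plus a lightweight content check. A sketch using only the standard library (the function name and the PNG/JPEG restriction are illustrative choices, not the skill's API; a fuller validation could use Pillow's `Image.verify()`):

```python
from pathlib import Path

# Magic bytes for the two formats this sketch chooses to accept.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
JPEG_MAGIC = b"\xff\xd8\xff"

def validated_image_path(path_str: str, allowed_dir: str) -> Path:
    """Restrict image loading to one directory and reject non-image files."""
    base = Path(allowed_dir).resolve()
    # Keep only the base name, discarding any directory components.
    candidate = (base / Path(path_str).name).resolve()
    if candidate.parent != base:
        raise ValueError("image path escapes the allowed directory")
    header = candidate.read_bytes()[:8]
    if not (header.startswith(PNG_MAGIC) or header.startswith(JPEG_MAGIC)):
        raise ValueError("file is not a PNG or JPEG image")
    return candidate
```

A path like `/etc/passwd` would fail both checks: the directory component is stripped, and the magic-byte test rejects non-image content.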
Full report: https://skillshield.io/report/0910046615ad590f
Powered by SkillShield