Security Audit
dkyazzentwatwa/chatgpt-skills:api-response-mocker
github.com/dkyazzentwatwa/chatgpt-skills

Trust Assessment
dkyazzentwatwa/chatgpt-skills:api-response-mocker received a trust score of 63/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings: Arbitrary File Read via Schema Path (high), Arbitrary File Write via Output Path (high), and an unpinned Python dependency version (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 24, 2026 (commit d4bad335). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary File Read via Schema Path.** The `from_schema_file` method in `APIMocker` reads a JSON schema from a file path provided directly by the user via the `--schema` CLI argument. An attacker could specify an arbitrary path (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) to attempt to read sensitive system files. Although `json.load` would fail on non-JSON files, reading and potentially exposing the contents of a sensitive file that happens to be valid JSON (or partially valid before an error) constitutes a data exfiltration risk. Remediation: validate and sanitize user-supplied file paths; restrict file operations to a designated, isolated directory (or use a file picker interface); and if direct path input is necessary, verify the path resolves within an allowed directory and contains no directory traversal sequences (e.g., `../`). | Static | scripts/api_mocker.py:137 |
| HIGH | **Arbitrary File Write via Output Path.** The `save` method in `APIMocker` writes generated data to a file path provided directly by the user via the `--output` CLI argument. An attacker could specify an arbitrary path (e.g., `/etc/cron.d/malicious_job`, `/var/www/html/malicious.php`). While the generated data is fake rather than inherently malicious code, overwriting critical system files or filling disk space can cause denial of service or system instability, and output placed in an executable context can become a foothold for further attacks. Remediation: validate and sanitize user-supplied file paths; restrict writes to a designated output directory; reject directory traversal sequences (e.g., `../`); and consider a confirmation step before overwriting existing files. | Static | scripts/api_mocker.py:153 |
| MEDIUM | **Unpinned Python dependency version.** The requirement `faker>=22.0.0` is not pinned to an exact version, so installs may silently pull in newer, unvetted releases. Remediation: pin Python dependencies with `==<exact version>`. | Dependencies | api-response-mocker/scripts/requirements.txt:1 |
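Both HIGH findings share the same remediation: resolve the user-supplied path and verify it stays inside an allowed base directory before reading or writing. A minimal sketch of that containment check, assuming Python 3.9+ (the function name and directory layout are hypothetical, not part of the skill's code):

```python
from pathlib import Path

def resolve_within(base_dir: str, user_path: str) -> Path:
    """Resolve user_path and ensure it stays inside base_dir.

    Rejects absolute paths and '../' traversal sequences that
    would escape the allowed directory, raising ValueError.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # is_relative_to (Python 3.9+) is False if resolution escaped base
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes allowed directory: {user_path}")
    return candidate

# Hypothetical usage in the CLI handlers:
#   schema_path = resolve_within("schemas", args.schema)
#   output_path = resolve_within("output", args.output)
```

Note that joining an absolute right-hand path with `/` discards the base, so `resolve_within("schemas", "/etc/passwd")` resolves to `/etc/passwd` and is correctly rejected. The MEDIUM finding is addressed by pinning in `requirements.txt`, e.g. `faker==22.0.0` (or whichever exact version the skill has been tested against).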
[Full report](https://skillshield.io/report/0dc6c8091046d065)
Powered by SkillShield