Security Audit
dkyazzentwatwa/chatgpt-skills:dependency-analyzer
github.com/dkyazzentwatwa/chatgpt-skills

Trust Assessment
dkyazzentwatwa/chatgpt-skills:dependency-analyzer received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 1 medium and 1 informational (0 critical, 0 high, 0 low severity). Key findings include Arbitrary File Write Capability and Broad Filesystem Read Access.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 24, 2026 (commit d4bad335). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Arbitrary File Write Capability.** The `save_requirements` method writes to an arbitrary file path specified by the `output` argument. If an attacker can control this argument, they could overwrite critical system files or other sensitive data with the generated requirements list, leading to data integrity issues or denial of service. Implement path validation and sanitization for the `output` argument so it only writes to allowed directories (e.g., a designated output folder or temporary directory). Consider adding a confirmation step for sensitive paths in interactive contexts. | Static | `scripts/dependency_analyzer.py:288` |
| INFO | **Broad Filesystem Read Access.** The skill's core functionality involves analyzing Python files within a specified directory and its subdirectories (`analyze_file`, `analyze_project`, `find_unused_imports`, `find_circular_imports`). This grants broad read access to the filesystem within the scope of the provided paths. While necessary for its intended purpose, it means the skill could be prompted to read sensitive code or configuration files if given an inappropriate directory by the LLM or a malicious user. Ensure the LLM's execution environment is sandboxed and that the agent is only granted access to directories strictly necessary for its operation. When invoking the skill, validate and restrict the `filepath` and `directory` arguments to prevent access to sensitive areas. | Static | `scripts/dependency_analyzer.py:100` |
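Both findings recommend confining user-supplied paths to an allowed root. A minimal sketch of that mitigation is below; `ALLOWED_OUTPUT_DIR` and `safe_output_path` are hypothetical names, not part of the audited skill, and the same check applies to the `filepath` and `directory` read arguments.

```python
from pathlib import Path

# Hypothetical sandbox root; in the skill this would be a designated
# output folder or a temporary directory.
ALLOWED_OUTPUT_DIR = Path("output").resolve()

def safe_output_path(output: str) -> Path:
    """Resolve a user-supplied path and reject anything that escapes
    the allowed directory (e.g. via '..' traversal or absolute paths)."""
    candidate = (ALLOWED_OUTPUT_DIR / output).resolve()
    # Path.is_relative_to requires Python 3.9+.
    if not candidate.is_relative_to(ALLOWED_OUTPUT_DIR):
        raise ValueError(f"refusing to write outside {ALLOWED_OUTPUT_DIR}: {output!r}")
    return candidate
```

Resolving before checking is the key step: it collapses `..` segments and symlinked prefixes, so a traversal like `../../etc/passwd` (or an absolute path, which `Path.__truediv__` lets replace the base) fails the `is_relative_to` test instead of silently escaping the sandbox.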
[View the full report](https://skillshield.io/report/1795011a2260e5ec)
Powered by SkillShield