Trust Assessment
chaos-lab received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 0 critical, 0 high, 4 medium, and 1 low severity. Key findings include a suspicious `requests` import, potential data exfiltration via an LLM API, and an unpinned dependency in the installation instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Suspicious import: `requests`** — import of `requests` detected. This module provides network or low-level system access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/jbbottoms/chaos-lab/scripts/run-duo.py:9 |
| MEDIUM | **Suspicious import: `requests`** — same finding as above, at a second location. | Static | skills/jbbottoms/chaos-lab/scripts/run-trio.py:9 |
| MEDIUM | **Potential data exfiltration via LLM API** — the skill reads all files recursively from `/tmp/chaos-sandbox` and sends their content directly to the external Gemini API as part of the user prompt. The skill's documentation states the directory is for "dummy data" and "custom scenarios" (including "sensitive configs" or "intentional vulnerabilities"), but nothing programmatically prevents real sensitive user data from being placed there; if a user inadvertently does so, the contents are exfiltrated to the Gemini API. Recommended: filter or sanitize files read from the sandbox, warn users clearly about what belongs in `/tmp/chaos-sandbox`, hash or redact sensitive-looking patterns before sending to the LLM, or allow-list permitted file types/extensions. | LLM | scripts/run-duo.py:50 |
| MEDIUM | **Potential data exfiltration via LLM API** — same finding as above, at a second location. | LLM | scripts/run-trio.py:80 |
| LOW | **Unpinned dependency in installation instructions** — the installation instructions in `SKILL.md` recommend installing the `requests` library without specifying a version. This can lead to supply chain vulnerabilities if a malicious version of the package is published, or if future versions introduce breaking changes or security flaws; an attacker could compromise `requests` or its dependencies to inject malicious code. Recommended: pin the dependency to a specific, known-good version (e.g., `pip3 install requests==2.28.1`) for consistent, secure installations, and regularly review and update pinned dependencies. | LLM | SKILL.md:49 |
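The exfiltration findings above recommend allow-listing file types and redacting sensitive patterns before sandbox contents reach the LLM. A minimal sketch of that mitigation, assuming a standalone Python helper (the function name, extension list, and secret patterns here are illustrative, not taken from the skill's code):

```python
import re
from pathlib import Path

# Only plain-text file types are forwarded to the LLM prompt.
ALLOWED_EXTENSIONS = {".txt", ".md", ".json", ".yaml", ".yml"}

# Crude patterns for secret-looking content; redacted before sending.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]


def collect_sandbox_files(sandbox: Path) -> dict[str, str]:
    """Read only allow-listed files from the sandbox, redacting
    secret-looking spans, and return {relative path: sanitized text}."""
    contents = {}
    for path in sorted(sandbox.rglob("*")):
        if not path.is_file() or path.suffix not in ALLOWED_EXTENSIONS:
            continue  # skip binaries, key files, and unknown types
        text = path.read_text(errors="replace")
        for pattern in SECRET_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        contents[str(path.relative_to(sandbox))] = text
    return contents
```

This does not make forwarding user files safe in general, but it narrows what can leave the machine and is straightforward to drop in front of the prompt-building step.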
[Full report](https://skillshield.io/report/435469c2451b6ec1)
Powered by SkillShield