Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/data-analyst
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/data-analyst received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 11 findings: 3 critical, 1 high, 6 medium, and 1 low severity. Key findings include network egress to untrusted endpoints, unpinned Python dependency versions, and potential command injection via unsanitized user input in skill invocation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (11)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints: HTTP request to a raw IP address. Remediation: review all outbound network calls and remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | dist/skills/data-analyst/SKILL.md:98 |
| CRITICAL | Network egress to untrusted endpoints: HTTP request to a raw IP address. Remediation: review all outbound network calls and remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | dist/skills/data-analyst/scripts/create_dashboard.py:394 |
| CRITICAL | Potential Command Injection via Unsanitized User Input in Skill Invocation: the `SKILL.md` instructs the host LLM to execute `python3` scripts with arguments derived directly from user input (e.g., `<input_file.csv>`, `<output_analysis.json>`, `<output_dir>`, `<port>`). If the LLM does not sanitize these arguments before constructing the shell command, a malicious user could inject arbitrary shell commands; for example, providing `"; rm -rf / --no-preserve-root #"` as an input filename could lead to critical system compromise. Remediation: the host LLM must implement robust validation and sanitization of all user-provided arguments before constructing and executing shell commands; arguments should be properly quoted and escaped to prevent shell metacharacter interpretation, and file paths should be confined to an allowed, sandboxed directory. | LLM | SKILL.md:40 |
| HIGH | Arbitrary File Read/Write via User-Controlled File Paths: the Python scripts (`analyze_missing_values.py`, `impute_missing_values.py`, `create_dashboard.py`) accept file paths (e.g., `filepath`, `output_json`, `output_file`, `output_dir`) directly from command-line arguments, which are then used in functions such as `pd.read_csv()`, `open()`, `json.dump()`, `df.to_csv()`, and `fig.write_html()`. A malicious user could supply paths to sensitive system files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`) to read their contents, or overwrite critical system files or create files in arbitrary locations. Remediation: implement strict path validation; canonicalize all user-provided file paths and verify they reside within an allowed, sandboxed directory (e.g., a temporary directory or a user-specific workspace), and prevent directory traversal (e.g., `../`). | LLM | scripts/analyze_missing_values.py:49 |
| MEDIUM | Unpinned Python dependency version: requirement 'pandas>=2.0.0' is not pinned to an exact version. Remediation: pin Python dependencies with '==<exact version>'. | Dependencies | dist/skills/data-analyst/requirements.txt:1 |
| MEDIUM | Unpinned Python dependency version: requirement 'numpy>=1.24.0' is not pinned to an exact version. Remediation: pin Python dependencies with '==<exact version>'. | Dependencies | dist/skills/data-analyst/requirements.txt:2 |
| MEDIUM | Unpinned Python dependency version: requirement 'scikit-learn>=1.3.0' is not pinned to an exact version. Remediation: pin Python dependencies with '==<exact version>'. | Dependencies | dist/skills/data-analyst/requirements.txt:3 |
| MEDIUM | Unpinned Python dependency version: requirement 'plotly>=5.18.0' is not pinned to an exact version. Remediation: pin Python dependencies with '==<exact version>'. | Dependencies | dist/skills/data-analyst/requirements.txt:4 |
| MEDIUM | Unpinned Python dependency version: requirement 'dash>=2.14.0' is not pinned to an exact version. Remediation: pin Python dependencies with '==<exact version>'. | Dependencies | dist/skills/data-analyst/requirements.txt:5 |
| MEDIUM | Unpinned Python dependency version: requirement 'dash-bootstrap-components>=1.5.0' is not pinned to an exact version. Remediation: pin Python dependencies with '==<exact version>'. | Dependencies | dist/skills/data-analyst/requirements.txt:6 |
| LOW | Unpinned Dependencies in `requirements.txt`: dependencies are specified with `>=` version specifiers, which can lead to non-deterministic builds where different dependency versions are installed at different times. This could introduce unexpected breaking changes, compatibility issues, or security vulnerabilities if a new release of a dependency contains flaws. Remediation: pin exact versions for all dependencies (e.g., `pandas==2.0.0`), use a lock file (e.g., `pip freeze > requirements.lock`) or a tool such as Poetry or Pipenv for more robust dependency management, and regularly audit and update dependencies. | LLM | requirements.txt:1 |
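Per the dependency findings, a fully pinned `requirements.txt` would look like the fragment below. The exact version numbers here are illustrative only; generate real pins with `pip freeze` (or a lock file from Poetry/Pipenv) against a tested environment.

```
pandas==2.2.2
numpy==1.26.4
scikit-learn==1.4.2
plotly==5.22.0
dash==2.17.0
dash-bootstrap-components==1.6.0
```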
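The command-injection finding above hinges on how user arguments reach the shell. A minimal mitigation sketch, assuming the host constructs the `python3` invocation itself (the function name is illustrative, not part of the skill):

```python
import shlex
import subprocess

def run_skill_script(script: str, args: list[str]) -> None:
    # Passing argv as a list (and never using shell=True) makes each
    # argument a single argv entry, so shell metacharacters in user
    # input are treated as literal data, not as commands.
    cmd = ["python3", script, *args]
    subprocess.run(cmd, check=True)

# If a shell string is truly unavoidable, quote every argument:
malicious = '"; rm -rf / --no-preserve-root #'
quoted = shlex.quote(malicious)
# `quoted` is now a single-quoted token the shell treats as one literal word
print(quoted)
```

Argument-list invocation is the stronger fix; `shlex.quote` is a fallback for code paths that must build a shell string.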
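For the arbitrary file read/write finding, one common containment pattern (a sketch under the report's recommendation, not the skill's actual code) is to canonicalize every user-supplied path and reject anything that resolves outside a designated workspace:

```python
from pathlib import Path

def resolve_in_sandbox(user_path: str, sandbox: str) -> Path:
    """Return an absolute path inside `sandbox`, or raise ValueError."""
    root = Path(sandbox).resolve()
    # resolve() collapses "../" segments and follows symlinks, so both
    # traversal attempts and absolute paths like /etc/passwd are caught
    # by the containment check below.
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root):  # requires Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

Note that joining an absolute path (e.g., `/etc/passwd`) onto `root` with `pathlib` replaces the root entirely, which is exactly why the post-`resolve()` check is needed.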
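The network-egress findings can be addressed with an outbound allowlist that also rejects raw IP literals; a minimal sketch (the allowed host is a hypothetical placeholder):

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical allowlist

def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return False  # raw IP addresses are rejected outright, per the finding
    except ValueError:
        pass  # not an IP literal; fall through to the allowlist
    return host in ALLOWED_HOSTS
```

In practice such a check belongs at the network boundary (a proxy or sandbox policy), not only in application code, since the skill's scripts could bypass an in-process check.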
[View the full report on SkillShield](https://skillshield.io/report/9e25ef0621f3cd89)