Security Audit
ailabs-393/ai-labs-claude-skills:packages/skills/business-analytics-reporter
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:packages/skills/business-analytics-reporter received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include arbitrary file read via a user-controlled CSV path, arbitrary file write via a user-controlled output path, and potential command injection via unsanitized arguments to the `python` script.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary file write via user-controlled output path.** The `scripts/analyze_business_data.py` script takes the output JSON file path directly from command-line arguments (`sys.argv[2]`) without any validation or sanitization. If an attacker can control this `output_path` (e.g., through prompt injection against the LLM that invokes the script), they could write arbitrary JSON content to any location on the filesystem where the process has write permissions. This could lead to overwriting critical system files, creating malicious configuration files (e.g., cron jobs, web server configs), or planting web shells, effectively enabling command injection or denial of service. *Remediation:* implement strict path validation and sanitization for `output_path`; restrict file writing to a designated, isolated output directory; disallow absolute paths and path traversal (`../`); ensure the output directory has appropriate permissions. | LLM | scripts/analyze_business_data.py:284 |
| HIGH | **Arbitrary file read via user-controlled CSV path.** The `scripts/analyze_business_data.py` script takes the input CSV file path directly from command-line arguments (`sys.argv[1]`) without any validation or sanitization. If an attacker can control this `csv_path` (e.g., through prompt injection against the LLM that invokes the script), they could specify paths to sensitive files on the system (e.g., `/etc/passwd`, `/app/secrets/api_key.txt`). Although `pd.read_csv` may fail on non-CSV files, the attempt to read and potentially process parts of arbitrary files constitutes a data exfiltration risk. *Remediation:* implement strict path validation and sanitization for `csv_path`; restrict file access to a designated, isolated directory (e.g., a temporary sandbox); disallow absolute paths and path traversal (`../`); only allow reading files the user uploaded into that sandbox. | LLM | scripts/analyze_business_data.py:19 |
| HIGH | **Potential command injection via unsanitized arguments to the `python` script.** The `SKILL.md` explicitly instructs the LLM to execute the Python script via a shell command: `python scripts/analyze_business_data.py path/to/business_data.csv output_report.json`. If `path/to/business_data.csv` or `output_report.json` is constructed from untrusted user input without proper shell escaping or sanitization, an attacker could inject arbitrary shell commands. For example, if `path/to/business_data.csv` is `'; rm -rf /; #'` or `output_report.json` is `'; malicious_command; #'`, the shell could execute the injected commands. While the Python script itself treats these values as file paths, the *invocation* of the script is vulnerable. *Remediation:* the LLM orchestrator should ensure that any arguments passed to shell commands derived from user input are strictly sanitized and shell-escaped. Ideally, avoid direct shell execution with user-controlled arguments; instead, pass arguments directly to the Python interpreter or use a more secure execution environment (e.g., a dedicated API call for script execution). | LLM | SKILL.md:49 |
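The remediations above can be sketched in a few lines of Python. This is a minimal, illustrative example, not the skill's actual code: the sandbox directory `/tmp/skill_sandbox` and the function names `safe_path` and `run_analysis` are assumptions, and `Path.is_relative_to` requires Python 3.9+. It confines both paths to a single directory (addressing the read/write findings) and invokes the script with list-form `argv` and no shell (addressing the injection finding).

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical sandbox root; in practice this would be a per-session temp dir.
ALLOWED_DIR = Path("/tmp/skill_sandbox").resolve()


def safe_path(user_path: str, base: Path = ALLOWED_DIR) -> Path:
    """Resolve a user-supplied path and reject anything outside the sandbox.

    Resolving first and then checking containment defeats both absolute
    paths ("/etc/passwd") and traversal sequences ("../../etc/passwd").
    """
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate


def run_analysis(csv_arg: str, out_arg: str) -> None:
    """Invoke the analysis script with validated paths.

    Passing argv as a list (no shell=True) means the arguments are never
    parsed by /bin/sh, so a filename like "; rm -rf /; #" stays inert.
    """
    subprocess.run(
        [sys.executable, "scripts/analyze_business_data.py",
         str(safe_path(csv_arg)), str(safe_path(out_arg))],
        check=True,
    )
```

If shell execution cannot be avoided entirely, `shlex.quote()` on each argument is the standard fallback, but the list-form `subprocess.run` call above removes the shell from the picture altogether.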
Full report: https://skillshield.io/report/6c7135eb27783345