Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/business-analytics-reporter
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/business-analytics-reporter received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Arbitrary File Write via Unsanitized Path and Arbitrary File Read via Unsanitized Path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary File Write via Unsanitized Path.** The `analyze_business_data.py` script takes an `output_json_path` directly from command-line arguments (`sys.argv[2]`) without any sanitization or validation. A malicious actor could provide a path like `/etc/cron.d/malicious_job` or `/root/.ssh/authorized_keys` to write arbitrary content to sensitive system locations, leading to command injection, privilege escalation, or system compromise. *Remediation:* Implement strict path validation for `output_json_path`. Ensure the path is within an allowed, non-sensitive directory (e.g., a temporary sandbox or a user-specific data directory). Use `pathlib.Path.resolve()` and verify that the resolved path is a child of an allowed base directory. Alternatively, the skill should only write to a pre-defined, secure temporary location. | Static | scripts/analyze_business_data.py:204 |
| HIGH | **Arbitrary File Read via Unsanitized Path.** The `analyze_business_data.py` script takes an `input_csv_path` directly from command-line arguments (`sys.argv[1]`) without any sanitization or validation. A malicious actor could provide a path like `/etc/passwd`, `/app/secrets.txt`, or other sensitive files. Although `pd.read_csv` might fail if the file is not a valid CSV, the attempt to read arbitrary files constitutes a data exfiltration risk; error messages or partial reads could expose sensitive information. *Remediation:* Implement strict path validation for `input_csv_path`. Ensure the path is within an allowed, user-provided data directory. Use `pathlib.Path.resolve()` and verify that the resolved path is a child of an allowed base directory. Ideally, the skill should receive the content of the CSV directly rather than a file path, or operate within a highly restricted file system sandbox. | Static | scripts/analyze_business_data.py:203 |
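The remediation suggested in both findings can be sketched as a small helper. This is a minimal example, not code from the audited skill: the base directory name and function name are hypothetical, and it assumes Python 3.9+ for `Path.is_relative_to`.

```python
from pathlib import Path

# Hypothetical sandbox directory; the skill would pick its own allowed base.
ALLOWED_BASE = Path("/tmp/skill-data")

def validate_path(user_path: str, base: Path = ALLOWED_BASE) -> Path:
    """Resolve a user-supplied path and reject anything outside `base`.

    Resolving both paths normalizes `..` segments and symlinks, so a value
    like `../../etc/passwd` cannot escape the allowed directory.
    """
    resolved = (base / user_path).resolve()
    base_resolved = base.resolve()
    if not resolved.is_relative_to(base_resolved):
        raise ValueError(f"Path escapes allowed directory: {user_path}")
    return resolved
```

A caller would pass `sys.argv[1]` and `sys.argv[2]` through this check before opening either file, so both the read and the write are confined to the sandbox.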
Powered by SkillShield