Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/brand-analyzer
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/brand-analyzer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are an LLM instructed to write files with a user-controlled filename (potential path traversal and arbitrary file write) and an LLM instructed to read files (potential arbitrary file read via prompt injection).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | LLM instructed to write files with a user-controlled filename; potential for path traversal and arbitrary file write. The skill instructs the LLM to create output documents (e.g., `brand-guidelines-BRANDNAME-YYYY-MM-DD.md`) and save them to the project root or the `brand-documents/` directory. The `BRANDNAME` portion of the filename is derived from user input. Without explicit sanitization, a malicious user could inject path traversal sequences (e.g., `../../`) into `BRANDNAME`, tricking the LLM into writing files to arbitrary filesystem locations. This could overwrite critical system files, exfiltrate sensitive data by writing it to a publicly accessible directory, or create malicious files; it also implies the LLM has filesystem write access, an excessive permission if not properly sandboxed. Mitigation: strictly sanitize and validate `BRANDNAME` and any other user-controlled input used in file paths or names, confine the LLM's write operations to a designated, sandboxed output directory, and explicitly block or strip path traversal sequences. The LLM should not be able to write outside its designated working directory. | LLM | SKILL.md:128 |
| HIGH | LLM instructed to read files; potential for arbitrary file read via prompt injection. The skill instructs the LLM to load content from specific markdown files in the `references/` and `assets/` directories. Although the instructions name particular files, an attacker could craft a prompt injection that manipulates the LLM into reading arbitrary files on the host system (e.g., `/etc/passwd`, environment variables, or other sensitive files) by subverting its file-loading mechanism, leading to data exfiltration. Mitigation: validate and sanitize any file paths or names derived from user input, sandbox the LLM's read access to only the explicitly allowed directories and file types, and never let the LLM construct file paths directly from untrusted input. | LLM | SKILL.md:67 |
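The mitigation for the critical finding amounts to sanitizing the user-supplied brand name and confining writes to one directory. A minimal Python sketch, assuming a `brand-documents/` output sandbox; the helper names (`sanitize_brand_name`, `safe_output_path`) are hypothetical, not part of the skill:

```python
import re
from pathlib import Path

# Assumed sandbox: the only directory the skill is allowed to write into.
ALLOWED_OUTPUT_DIR = Path("brand-documents").resolve()

def sanitize_brand_name(raw: str) -> str:
    """Keep only alphanumerics and hyphens, so path separators and
    traversal sequences like '../../' cannot survive into the filename."""
    cleaned = re.sub(r"[^A-Za-z0-9-]", "", raw)
    if not cleaned:
        raise ValueError("brand name empty after sanitization")
    return cleaned[:64]  # cap length defensively

def safe_output_path(brand: str, date: str) -> Path:
    name = f"brand-guidelines-{sanitize_brand_name(brand)}-{date}.md"
    path = (ALLOWED_OUTPUT_DIR / name).resolve()
    # Belt and braces: verify the resolved path is still inside the sandbox.
    if ALLOWED_OUTPUT_DIR not in path.parents:
        raise ValueError("path escapes output directory")
    return path
```

Resolving the path *after* joining, then re-checking containment, catches any traversal sequence the character filter might somehow miss.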
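For the high-severity finding, file reads can be funneled through an allowlist check before any content reaches the model. A minimal sketch, assuming reads are only legitimate for `.md` files under `references/` and `assets/` (directory names taken from the finding; the `safe_read` helper is hypothetical):

```python
from pathlib import Path

# Assumed allowlist: the only locations and file types the skill may read.
ALLOWED_READ_DIRS = [Path("references").resolve(), Path("assets").resolve()]
ALLOWED_SUFFIXES = {".md"}

def safe_read(requested: str) -> str:
    """Resolve the requested path and reject anything outside the
    allowlisted directories or with a disallowed extension."""
    path = Path(requested).resolve()
    if path.suffix not in ALLOWED_SUFFIXES:
        raise PermissionError(f"disallowed file type: {requested!r}")
    if not any(base in path.parents for base in ALLOWED_READ_DIRS):
        raise PermissionError(f"read outside allowed directories: {requested!r}")
    return path.read_text(encoding="utf-8")
```

A request such as `safe_read("/etc/passwd")` fails the suffix check, and `safe_read("../secrets.md")` resolves outside the allowlisted directories and is rejected, regardless of what the prompt asked for.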
Full report: [skillshield.io/report/6d2119fff96c5d0e](https://skillshield.io/report/6d2119fff96c5d0e)
Powered by SkillShield