Security Audit
ailabs-393/ai-labs-claude-skills:packages/skills/storyboard-manager
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:packages/skills/storyboard-manager received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. Key findings: arbitrary filesystem scanning via `project_root` manipulation; arbitrary file content reading via `grep` argument manipulation; and arbitrary file write via `character-name` path traversal.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary filesystem scanning via `project_root` manipulation.** The skill's `SKILL.md` instructs the LLM to "scan the project root" and shows the Python scripts (`timeline_tracker.py`, `consistency_checker.py`) being run with a `project_root` argument (e.g., `.` for the current directory). A prompt-injection attacker who steers the LLM into setting `project_root` to a sensitive directory (e.g., `/`, `/etc`, `~`) causes the scripts to recursively scan that entire tree for `.md` files. Although the scripts target only `.md` files and skip hidden directories starting with `.`, this can still exfiltrate sensitive markdown documents (e.g., `README.md` files in system directories, documentation, notes) from unintended locations on the host. **Remediation:** (1) Restrict `project_root`: validate that the argument passed to the scripts is always a subdirectory of the skill's designated workspace or the current working directory, preventing traversal to arbitrary system locations. (2) Whitelist file types: where possible, restrict `scan_directory` to specific expected files (e.g., `summary.md`, `chapter-*.md`) rather than all `.md` files, or limit recursion depth. (3) Isolate execution: run the scripts in a sandbox with minimal filesystem access. | LLM | SKILL.md:60 |
| HIGH | **Arbitrary file content reading via `grep` argument manipulation.** `SKILL.md` gives the example `grep -i "character arc" references/character_development.md`. A prompt-injection attacker who swaps the file-path argument for a sensitive host file (e.g., `/etc/passwd`, `/root/.ssh/id_rsa`) can coerce the LLM into reading, and potentially exfiltrating, its contents. **Remediation:** (1) Restrict `grep` paths: validate that `grep` runs only on files inside the skill's workspace or a whitelisted set of reference files. (2) Isolate execution: run `grep` in a sandbox with minimal filesystem access. | LLM | SKILL.md:108 |
| HIGH | **Arbitrary file write via `character-name` path traversal.** `SKILL.md` instructs the LLM to "Write the character profile to `characters/[character-name].md`". A prompt-injection attacker who crafts a `character-name` containing traversal sequences (e.g., `../../../../etc/passwd`) can coerce the LLM into writing or overwriting files outside the intended `characters/` directory, risking data corruption, denial of service, or even remote code execution if critical system files are overwritten. **Remediation:** (1) Sanitize `character-name`: reject path-traversal characters (e.g., `..`, `/`) in the filename. (2) Restrict write scope: confine file writes to the skill's workspace or specific subdirectories. | LLM | SKILL.md:147 |
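The path-containment remediations for findings 1 and 3 follow a common pattern: resolve the candidate path and verify it stays inside a designated workspace. A minimal sketch, assuming a hypothetical `WORKSPACE` root and helper names not present in the skill itself:

```python
from pathlib import Path

# Assumption: the skill's designated workspace is the current working directory.
WORKSPACE = Path.cwd().resolve()

def validate_project_root(candidate: str) -> Path:
    """Reject any project_root that escapes the workspace (finding 1)."""
    resolved = (WORKSPACE / candidate).resolve()
    if resolved != WORKSPACE and WORKSPACE not in resolved.parents:
        raise ValueError(f"project_root escapes workspace: {candidate!r}")
    return resolved

def safe_character_path(character_name: str) -> Path:
    """Build characters/<name>.md, rejecting traversal names (finding 3)."""
    # Reject separators and '..' outright rather than trying to normalize them.
    if not character_name or ".." in character_name \
            or any(sep in character_name for sep in ("/", "\\")):
        raise ValueError(f"invalid character name: {character_name!r}")
    target = (WORKSPACE / "characters" / f"{character_name}.md").resolve()
    if WORKSPACE not in target.parents:
        raise ValueError("write escapes workspace")
    return target
```

The same containment check could gate the file argument handed to `grep` (finding 2). Resolving before comparing matters: a prefix check on the raw string would accept `../../etc` or symlinked paths that `Path.resolve()` exposes.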
Full report: https://skillshield.io/report/81a5db470c870edc