Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/pitch-deck
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/pitch-deck received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 2 critical, 0 high, 0 medium, and 0 low severity. The key findings are potential command injection via a `grep` command in the skill workflow, and potential command injection via `python3` script execution in the skill workflow.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via `grep` command in skill workflow.** The skill's workflow, as described in `SKILL.md`, instructs the host LLM to execute a `grep` command. The command includes a placeholder, `[Slide Number]. [Slide Name]`, that is expected to be derived from user input or context. If this placeholder is populated directly from unsanitized user input, an attacker could inject shell metacharacters (e.g., `;`, `\|`, `&`, `$(...)`) to execute arbitrary commands on the host system, a critical command injection vulnerability. Recommendation: the LLM should avoid constructing and executing shell commands with unsanitized user input. Rather than relying on `grep`, it should read `references/pitch_deck_best_practices.md` and search its content programmatically within a safe execution environment; if shell execution is unavoidable, all user-provided strings used in shell commands must be strictly sanitized to prevent injection of metacharacters. | LLM | SKILL.md:65 |
| CRITICAL | **Potential Command Injection via `python3` script execution in skill workflow.** The skill's workflow, as described in `SKILL.md`, instructs the host LLM to run a Python script (`scripts/create_pitch_deck.py`) via a shell command whose `output_filename.pptx` argument is expected to be user-controlled. If that filename is populated directly from unsanitized user input, an attacker could inject shell metacharacters (e.g., `;`, `\|`, `&`, `$(...)`) to execute arbitrary commands on the host system, a critical command injection vulnerability. Recommendation: strictly sanitize `output_filename.pptx` to remove shell metacharacters; ideally, invoke `create_pitch_deck.py` directly as a function or module within the LLM's execution environment rather than via a shell command, eliminating the shell injection surface entirely. | LLM | SKILL.md:134 |
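The remediation advice in both findings can be sketched in Python. This is an illustrative sketch, not code from the skill: `sanitize_pptx_filename` and `run_deck_script` are hypothetical helpers showing how a host could reduce a user-supplied filename to a safe basename and invoke the script with an argument list rather than a shell string, so metacharacters are never interpreted by a shell.

```python
import re
import subprocess
from pathlib import Path

def sanitize_pptx_filename(user_input: str) -> str:
    """Reduce a user-supplied filename to a safe basename.

    Drops any directory components, replaces every character outside a
    conservative allow-list with "_", and enforces the .pptx extension.
    """
    name = Path(user_input).name                   # strip path components
    name = re.sub(r"[^A-Za-z0-9._-]", "_", name)   # allow-list characters
    if not name.endswith(".pptx"):
        name += ".pptx"
    return name

def run_deck_script(output_filename: str) -> None:
    """Invoke the script with an argument list and shell=False (the
    subprocess default), so the filename is passed as a single argv
    entry and shell metacharacters have no effect."""
    safe_name = sanitize_pptx_filename(output_filename)
    subprocess.run(
        ["python3", "scripts/create_pitch_deck.py", safe_name],
        check=True,
    )
```

Passing an argument list to `subprocess.run` avoids a shell entirely, which is a stronger guarantee than quoting; the allow-list sanitizer is a defense-in-depth layer for cases where the filename is later embedded in a shell command anyway.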