Trust Assessment
frontend-slides received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 low severity. Key findings include OS Command Injection via file opener utilities, Potential OS Command Injection via `python3` execution, and Potential Supply Chain Risk and Command Injection via package installation suggestion.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 46/100, indicating areas for improvement.
Last analyzed on March 20, 2026 (commit 9a478ad6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **OS Command Injection via file opener utilities.** The skill explicitly instructs the host LLM to use OS-specific commands (`open`, `xdg-open`, `start`) to open the generated HTML file. If the filename (`file.html`) is derived from untrusted user input without proper sanitization, an attacker could inject arbitrary shell commands (e.g., `file.html; rm -rf /`), leading to arbitrary code execution on the host system. The LLM should never directly execute shell commands constructed from untrusted input. If file opening is necessary, it must go through a secure, sandboxed API that does not allow arbitrary command execution, or the filename must be strictly validated and sanitized to prevent injection. | LLM | SKILL.md:150 |
| HIGH | **Potential OS Command Injection via `python3` execution.** The skill instructs the host LLM to use `python3` with `python-pptx` for PowerPoint conversion. If the Python script's content, its arguments, or the path to the PPT/PPTX file is influenced by untrusted user input without sanitization, this could lead to arbitrary code execution on the host system. All arguments passed to `python3`, and any Python script executed, must be strictly controlled and sanitized. Ideally, the LLM should not directly execute external programs based on untrusted input; if `python-pptx` functionality is required, it should be integrated via a secure API or a pre-defined, sandboxed script with strictly validated inputs. | LLM | SKILL.md:157 |
| MEDIUM | **Potential Supply Chain Risk and Command Injection via package installation suggestion.** The skill suggests installing `python-pptx` if it is unavailable. If the host LLM is instructed to perform this installation and the package name or installation command can be influenced by untrusted user input, this could lead to installing malicious packages (typosquatting) or executing arbitrary commands during installation. The LLM should never install packages based on untrusted input. If a dependency is required, it should be pre-installed in the environment, or the user should be explicitly instructed to install it manually. If the LLM is to suggest an installation command, the package name must be hardcoded, not derived from user input. | LLM | SKILL.md:159 |
| LOW | **Potential Directory Traversal via dynamic presentation filename.** The skill instructs the host LLM to output `[presentation-name].html`. If `presentation-name` is directly derived from untrusted user input without sanitization, an attacker could use directory traversal sequences (e.g., `../../evil.html`) to write files outside the intended output directory. All filenames derived from untrusted input must be strictly sanitized to remove path separators and other special characters, or confined to a specific output directory. | LLM | SKILL.md:79 |
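The mitigations for the CRITICAL and LOW findings (shell-free file opening and filename confinement) can be sketched in Python. This is a minimal illustration, not code from the skill itself; the helper names (`sanitize_filename`, `resolve_output_path`, `open_in_browser`) and the `output/` directory are hypothetical:

```python
import pathlib
import re
import subprocess
import sys

# Hypothetical fixed output directory; all generated files are confined here.
OUTPUT_DIR = pathlib.Path("output").resolve()

def sanitize_filename(name: str) -> str:
    """Replace path separators and shell-significant characters with '_'."""
    cleaned = re.sub(r"[^A-Za-z0-9._-]", "_", name)
    if cleaned in ("", ".", ".."):
        raise ValueError(f"unusable filename: {name!r}")
    return cleaned

def resolve_output_path(name: str) -> pathlib.Path:
    """Confine output to OUTPUT_DIR, blocking traversal like '../../evil.html'."""
    path = (OUTPUT_DIR / sanitize_filename(name)).resolve()
    if OUTPUT_DIR not in path.parents:
        raise ValueError(f"path escapes output directory: {path}")
    return path

def open_in_browser(path: pathlib.Path) -> None:
    """Open a file with the OS opener without invoking a shell.

    Passing an argument list (not a string) to subprocess.run means the
    filename is never parsed by a shell, so 'file.html; rm -rf /' is a
    literal (nonexistent) filename rather than two commands.
    """
    if sys.platform == "darwin":
        args = ["open", str(path)]
    elif sys.platform == "win32":
        args = ["cmd", "/c", "start", "", str(path)]
    else:
        args = ["xdg-open", str(path)]
    subprocess.run(args, check=True)
```

Because the opener receives an argument vector and no shell is spawned, sanitization is defense in depth rather than the only barrier.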
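The HIGH and MEDIUM findings call for running a fixed, pre-defined script with validated arguments and a hardcoded dependency name. A hedged sketch under those assumptions; `convert_pptx.py` is a hypothetical pre-defined converter script, not part of the skill:

```python
import subprocess
import sys

# Hardcoded, never derived from user input, so an install suggestion
# cannot be steered toward a typosquatted package.
REQUIRED_PACKAGE = "python-pptx"
CONVERTER_SCRIPT = "convert_pptx.py"  # hypothetical, pre-defined script

def ensure_dependency() -> None:
    """Suggest (not perform) installation of the hardcoded dependency."""
    try:
        import pptx  # noqa: F401 -- the module python-pptx provides
    except ImportError:
        raise SystemExit(
            f"Missing dependency; install it manually: pip install {REQUIRED_PACKAGE}"
        )

def convert(pptx_path: str) -> None:
    """Run the fixed converter on a validated path; no shell is involved."""
    if not pptx_path.endswith((".ppt", ".pptx")):
        raise ValueError(f"not a PowerPoint file: {pptx_path!r}")
    # Argument-list form: pptx_path is a single argv entry, never shell-parsed.
    subprocess.run([sys.executable, CONVERTER_SCRIPT, "--input", pptx_path],
                   check=True)
```

The key design choice is that the LLM's untrusted input only ever fills one validated argv slot; the interpreter, script, and package name stay constant.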
Embed Code
[SkillShield report](https://skillshield.io/report/984aa8940d95279a)
Powered by SkillShield