Trust Assessment
storyboard-generator received a trust score of 65/100, placing it in the Caution category: the skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are a potential command injection via the `open` command and a stored cross-site scripting (XSS) vulnerability in the generated HTML output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential command injection via the `open` command.** The skill explicitly instructs the agent to execute the shell command `open {文件路径}` (file path) to display the generated HTML. The `{文件路径}` is constructed from `{项目名}` (project name) and `{集数}` (episode number). If these variables derive from untrusted user input, an attacker could inject shell metacharacters into the filename and achieve arbitrary code execution on the host system; for example, injecting `"; rm -rf /; #` into `{项目名}` could lead to data loss or further system compromise. Mitigation: avoid direct shell execution with unsanitized, user-controlled input. If opening a file is necessary, use an API that handles the path as data rather than interpreting it as a shell command, and strictly validate and sanitize every component of the file path. | LLM | SKILL.md:230 |
| HIGH | **Stored cross-site scripting (XSS) in the generated HTML output.** The skill generates an HTML file (`{项目名}_分镜展示_第{集数}集.html`, i.e. "{project name}\_storyboard\_episode {n}.html") by replacing placeholders (`{{PROJECT_NAME}}`, `{{EPISODE_TITLE}}`, `{{NAV_ITEMS}}`, `{{GALLERY_SECTIONS}}`, `{{SHOT_DATA_JSON}}`). The content for these placeholders, especially `{{NAV_ITEMS}}`, `{{GALLERY_SECTIONS}}`, and fields inside `{{SHOT_DATA_JSON}}` (such as `title`, `info`, `moment`, `prompt`), comes from user-provided input (script, character cards, etc.) and LLM-generated text, with no mention of sanitization or escaping before it is embedded in the HTML. An attacker could inject malicious HTML or JavaScript that is stored in the generated file; when a user opens it, the script executes in their browser, potentially enabling data exfiltration, session hijacking, or further prompt injection if the browser environment can interact with the LLM. Mitigation: HTML-escape all user-controlled and LLM-generated content before inserting it into the template. For JSON embedded in `<script>` tags, JSON-encode it correctly and additionally escape sequences that could close the script block. | LLM | SKILL.md:190 |
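The command-injection finding can be addressed by validating path components and invoking `open` with an argument list instead of a shell string. A minimal Python sketch, assuming a macOS host; the `safe_open` helper and its whitelist pattern are illustrative, not part of the skill:

```python
import re
import subprocess

# Hypothetical whitelist: word characters (which in Python 3 regexes
# include CJK characters such as 项目名) and hyphens only.
SAFE_COMPONENT = re.compile(r"^[\w-]+$")

def safe_open(project: str, episode: str) -> None:
    """Open the generated HTML without routing the path through a shell."""
    for part in (project, episode):
        if not SAFE_COMPONENT.match(part):
            raise ValueError(f"unsafe path component: {part!r}")
    path = f"{project}_分镜展示_第{episode}集.html"
    # Argument-list form: the path is passed directly to the 'open'
    # binary, so metacharacters like ';' or '$(...)' are never parsed.
    subprocess.run(["open", path], check=True)
```

With this shape, a payload such as `"; rm -rf /; #` is rejected by validation before any process is spawned, and even an unvalidated path would reach `open` as a single literal argument.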
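The XSS finding calls for escaping at template-fill time. A minimal sketch of the recommended mitigations, assuming a simple string-replacement template; the `render` and `embed_json` helpers are hypothetical, not the skill's actual code:

```python
import html
import json

def render(template: str, values: dict[str, str]) -> str:
    """Fill {{PLACEHOLDER}} slots with HTML-escaped values."""
    out = template
    for key, raw in values.items():
        out = out.replace("{{" + key + "}}", html.escape(raw))
    return out

def embed_json(data: dict) -> str:
    """JSON-encode data for a <script> block, escaping '</' so a
    crafted string value cannot close the script tag early."""
    return json.dumps(data).replace("</", "<\\/")
```

Escaping at insertion time keeps markup in `{{NAV_ITEMS}}` or `{{GALLERY_SECTIONS}}` inert, and the `</`-escaping keeps a `</script>` payload inside `{{SHOT_DATA_JSON}}` from breaking out of its script block.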
Embed Code
[SkillShield Report](https://skillshield.io/report/488bdc985fc644fa)
Powered by SkillShield