Trust Assessment
Excalidraw Flowchart received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include potential command injection via the `--inline` argument, potential arbitrary file write via the `-o` argument, and an unpinned external dependency (`@swiftlysingh/excalidraw-cli`).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential command injection via `--inline` argument.** The skill instructs the LLM to execute `npx @swiftlysingh/excalidraw-cli` with user-provided DSL passed via the `--inline` argument. If the LLM does not properly sanitize or escape the user's input before constructing the shell command, a malicious user could inject arbitrary shell commands; for example, `"; rm -rf /; echo "` could be executed if not escaped, leading to arbitrary code execution. *Remediation:* strictly sanitize and escape all user-provided input before incorporating it into shell commands. In particular, the DSL string passed to `--inline` must be properly quoted and escaped to prevent shell-metacharacter interpretation. Consider using a dedicated library for shell command construction that handles escaping automatically. | LLM | SKILL.md:70 |
| HIGH | **Potential arbitrary file write via `-o` argument.** The skill instructs the LLM to use the `-o` argument to specify an output file path for the generated Excalidraw diagram. If the LLM allows a user to control this argument, a malicious user could specify an arbitrary file path (e.g., `/etc/passwd`, `~/.bashrc`) to overwrite or create files in sensitive locations on the filesystem, leading to data corruption or system compromise. *Remediation:* restrict the output file path to a safe, sandboxed directory. Do not allow users to specify arbitrary paths, especially those containing directory-traversal sequences (e.g., `../`). Validate and sanitize any user-provided filename to ensure it does not contain path separators or special characters. | LLM | SKILL.md:70 |
| MEDIUM | **Unpinned external dependency `@swiftlysingh/excalidraw-cli`.** The skill instructs the LLM to install `@swiftlysingh/excalidraw-cli` globally using `npm install -g` or execute it via `npx` without specifying a version, so the latest available version will always be used. This introduces a supply-chain risk: a future malicious or vulnerable version of the package could be published and automatically installed, compromising the agent's environment. *Remediation:* specify a fixed, known-good version for the `@swiftlysingh/excalidraw-cli` package (e.g., `npm install -g @swiftlysingh/excalidraw-cli@1.1.0`), and regularly review and update the pinned version after security vetting. For `npx`, use `npx @swiftlysingh/excalidraw-cli@1.1.0`. | LLM | SKILL.md:30 |
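As a sketch of the command-injection mitigation above — assuming the skill's wrapper is written in Python (the function names here are illustrative, not part of the skill) — the safest approach is to pass the DSL as a discrete argv element via `subprocess.run` with a list and no `shell=True`, so shell metacharacters are never interpreted; `shlex.quote` covers the fallback case where a single shell string is unavoidable:

```python
import shlex
import subprocess

def render_inline(dsl: str, out_path: str) -> None:
    """Invoke the CLI with the DSL as a single argv element.

    Because the command is a list and shell=True is NOT used, shell
    metacharacters in `dsl` (e.g. '"; rm -rf /; echo "') are delivered
    verbatim to the CLI instead of being interpreted by a shell.
    """
    subprocess.run(
        ["npx", "@swiftlysingh/excalidraw-cli", "--inline", dsl, "-o", out_path],
        check=True,
    )

def shell_safe(dsl: str) -> str:
    """Fallback: build a single shell string with the DSL quoted."""
    return f"npx @swiftlysingh/excalidraw-cli --inline {shlex.quote(dsl)}"
```

With this structure, the payload from the finding round-trips as inert data: `shlex.split(shell_safe('"; rm -rf /; echo "'))` recovers the original string as one argument rather than three commands.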
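The arbitrary-file-write finding can likewise be addressed by resolving the requested output path and verifying it stays inside a sandbox directory before passing it to `-o`. A minimal Python sketch (the sandbox location and `safe_output_path` helper are illustrative assumptions, not part of the skill):

```python
from pathlib import Path

SANDBOX = Path("/tmp/excalidraw-output")  # illustrative sandbox root

def safe_output_path(filename: str, base: Path = SANDBOX) -> Path:
    """Return an output path confined to `base`, or raise ValueError.

    The candidate is resolved before checking, so traversal sequences
    such as '../../etc/passwd' and absolute paths like '/etc/passwd'
    cannot escape the sandbox.
    """
    candidate = (base / filename).resolve()
    if not candidate.is_relative_to(base.resolve()):
        raise ValueError(f"output path escapes sandbox: {filename}")
    return candidate
```

Note that `Path.is_relative_to` requires Python 3.9+; the check runs on the *resolved* path, which is what defeats `..` components that a naive substring test would miss.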
[View the full report on SkillShield](https://skillshield.io/report/3bfe2cf372d4e8dc)
Powered by SkillShield