Trust Assessment
comfy-cli received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include Arbitrary Code Execution via Custom Node Installation, Arbitrary Code Execution via Untrusted Model or Pull Request Downloads, and Exposure of Sensitive API Tokens.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 41/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Code Execution via Custom Node Installation.** The `comfy node install <name>` command installs arbitrary custom nodes. ComfyUI custom nodes are Python packages that can execute arbitrary code on the system, so tricking the user or LLM into installing a malicious node leads to full system compromise. The same risk applies to `comfy node install-deps --workflow workflow.json`, where a malicious workflow can specify harmful dependencies that execute code during installation. Mitigation: enforce a strict whitelist of allowed node names and dependency sources, require explicit user confirmation for nodes or dependencies from untrusted or unknown sources, and consider sandboxing the installation process to limit potential damage. | LLM | SKILL.md:48 |
| HIGH | **Arbitrary Code Execution via Untrusted Model or Pull Request Downloads.** The skill downloads models from arbitrary URLs (`comfy model download --url <url>`) and can install ComfyUI directly from a GitHub pull request (`comfy install --pr 1234`). Pickle-based model formats, such as PyTorch `.pt` or `.pth` files, can contain serialized Python objects that execute arbitrary code when deserialized, so downloading from an untrusted URL carries a significant RCE risk. Installing from a specific pull request executes code that may not have undergone full review or security checks. Mitigation: strongly advise against installing from untrusted PRs or downloading models from unverified URLs, validate and scan downloaded model files for malicious content, and consider sandboxing model loading and installation. | LLM | SKILL.md:29 |
| MEDIUM | **Exposure of Sensitive API Tokens.** `comfy-cli` accepts sensitive API tokens (e.g. `civitai_api_token`, `hf_api_token`) via command-line flags (`--civitai-token`, `--hf-token`) or configuration files. While the skill itself does not exfiltrate these, tokens passed on the command line can leak through process lists, shell history, or logs, and tokens stored in config files are exposed if file permissions are lax or the system is compromised, enlarging the attack surface for credential harvesting. Mitigation: prefer environment variables over command-line arguments or plain-text configuration files, enforce restrictive permissions on config files that store tokens, and warn users about exposure risks. | LLM | SKILL.md:86 |
| MEDIUM | **Arbitrary Argument Passing to Underlying ComfyUI Application.** The `comfy launch -- --listen 0.0.0.0` command shows that arbitrary arguments can be passed straight through to the underlying ComfyUI application. If ComfyUI has vulnerabilities triggerable by specific command-line arguments, this could enable configuration changes, data exposure, or arbitrary code execution within the ComfyUI process. Mitigation: restrict arguments passed through the `--` separator to a predefined whitelist of safe options and validate input to prevent malicious argument injection. | LLM | SKILL.md:40 |
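The whitelist mitigation for the critical finding can be sketched as a thin guard in front of `comfy node install`. This is a minimal illustration, not part of comfy-cli: the `APPROVED_NODES` set and `safe_install_command` helper are hypothetical names, and a real deployment would source the allowlist from a vetted registry.

```python
# Hypothetical pre-install guard; APPROVED_NODES and the helper are
# illustrative names, not part of comfy-cli itself.
APPROVED_NODES = {"comfyui-manager", "comfyui-impact-pack"}  # example allowlist

def safe_install_command(node_name: str) -> list[str]:
    """Build a `comfy node install` argv only for whitelisted node names."""
    if node_name.lower() not in APPROVED_NODES:
        raise ValueError(f"Node {node_name!r} is not on the approved list")
    return ["comfy", "node", "install", node_name]

# A caller would then run the vetted command, e.g.:
#   subprocess.run(safe_install_command("comfyui-manager"), check=True)
```

Returning the argv list (rather than invoking the CLI directly) keeps the policy check separate from execution and easy to test.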
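For the model-download finding, one hedged approach is to triage URLs before fetching: accept `.safetensors` (a format designed not to embed executable pickles), and reject pickle-based formats from hosts outside an allowlist. The host list and return labels below are illustrative assumptions, not comfy-cli behavior.

```python
from urllib.parse import urlparse

# Illustrative triage: the trusted-host set is an assumption for this sketch.
PICKLE_EXTENSIONS = (".pt", ".pth", ".ckpt", ".pkl")
TRUSTED_HOSTS = {"huggingface.co", "civitai.com"}  # example allowlist

def assess_model_url(url: str) -> str:
    """Classify a model URL before handing it to a downloader."""
    parsed = urlparse(url)
    path = parsed.path.lower()
    if path.endswith(".safetensors"):
        return "ok"  # no embedded code on deserialization
    if path.endswith(PICKLE_EXTENSIONS):
        # Pickle-based formats can execute code when loaded.
        return "ok-trusted-pickle" if parsed.hostname in TRUSTED_HOSTS else "reject"
    return "review"  # unknown format: require manual review
```

Even for trusted hosts, scanning the downloaded file (or loading it in a sandbox) remains advisable, since a trusted host can still serve a malicious upload.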
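The token-exposure mitigation can be checked mechanically: verify that a token-bearing config file grants no access to group or others (mode `0o600` or stricter). The helper names and the `HF_API_TOKEN` variable below are illustrative, not a documented comfy-cli interface.

```python
import os
import stat

def mode_is_private(mode: int) -> bool:
    """True if a permission mode grants no access to group or others."""
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

def config_is_private(path: str) -> bool:
    """Check that a token-bearing config file is owner-only (0o600 or stricter)."""
    return mode_is_private(stat.S_IMODE(os.stat(path).st_mode))

# Prefer environment variables over command-line flags; the variable
# name here is an assumption, not something comfy-cli documents:
token = os.environ.get("HF_API_TOKEN")
```

A wrapper could refuse to read tokens from a config file that fails `config_is_private`, or repair it with `os.chmod(path, 0o600)` after prompting the user.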
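Finally, the passthrough-argument finding suggests whitelisting what may follow the `--` separator of `comfy launch`. The flag set below is a placeholder assumption; a real guard would enumerate the ComfyUI options the operator actually sanctions.

```python
# Illustrative allowlist of launch flags; extend deliberately, not by default.
SAFE_LAUNCH_FLAGS = {"--listen", "--port", "--cpu"}

def validate_launch_args(args: list[str]) -> list[str]:
    """Allow only whitelisted flags after the `--` separator of `comfy launch`."""
    for a in args:
        # Check the flag name, tolerating `--flag=value` syntax.
        if a.startswith("-") and a.split("=", 1)[0] not in SAFE_LAUNCH_FLAGS:
            raise ValueError(f"Disallowed launch flag: {a}")
    return ["comfy", "launch", "--", *args]
```

Rejecting unknown flags outright (rather than silently dropping them) makes injection attempts visible in logs instead of being absorbed.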