What Is ComfyUI?
ComfyUI is a node-based interface for Stable Diffusion image generation. Unlike the AUTOMATIC1111 WebUI, it lets you design each processing step visually on a graph. The approach can seem complex at first, but in the long run it enables far more flexible and powerful workflows.
ComfyUI advantages:

- **Visual workflow design:** You control each processing step on the graph
- **Lower memory usage:** It can run the same operations with less VRAM than A1111
- **Reusable workflows:** Workflows can be saved and shared as JSON
- **Modular structure:** You can add and remove nodes as needed
- **Speed:** Repeated runs are fast because ComfyUI caches results and re-executes only the nodes whose inputs changed
Installation
The easiest way to install ComfyUI:
**For Windows:**

1. Download the ComfyUI portable version (a 7z archive) from GitHub
2. Extract the archive
3. Run "run_nvidia_gpu.bat"
4. Open http://127.0.0.1:8188 in your browser
**For Mac and Linux:**

1. Make sure Python 3.10+ and Git are installed
2. Clone the repo: git clone https://github.com/comfyanonymous/ComfyUI
3. Create a virtual environment and install the dependencies (pip install -r requirements.txt)
4. Start the server with "python main.py"
Place checkpoint files in the "models/checkpoints" folder: copy the .safetensors files you download from Civitai or Hugging Face into it.
Basic Workflow Structure
Every ComfyUI workflow consists of these basic nodes:
1. **Load Checkpoint:** Selects the model to use (SD 1.5, SDXL, FLUX, etc.)
2. **CLIP Text Encode (Prompt):** Where you enter your positive prompt
3. **CLIP Text Encode (Negative):** Where you enter your negative prompt
4. **Empty Latent Image:** Sets the size of the image to be generated
5. **KSampler:** Performs the sampling (sampler, steps, CFG)
6. **VAE Decode:** Converts the latent image into a pixel image
7. **Save Image:** Saves the result
You create a basic text-to-image workflow by connecting these nodes in sequence. Connections between nodes are drawn as colored links; matching colors indicate the same data type.
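Under the hood, the node chain above is just a JSON graph: each node has a class type, its parameters, and links expressed as [source_node_id, output_index] pairs. A minimal sketch of such a graph as a Python dict, using hypothetical node IDs and a hypothetical model filename:

```python
# A minimal text-to-image node graph sketched as a dict (hypothetical
# IDs and checkpoint filename). Each link is [source_node_id, output_index];
# the checkpoint loader's outputs are MODEL (0), CLIP (1), and VAE (2).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"clip": ["1", 1], "text": "a lighthouse at dawn"}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
```

Reading the links makes the cabling explicit: the KSampler pulls its model from node 1's first output, its conditioning from the two text encoders, and its latent from the Empty Latent Image, exactly as the list above describes.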
Adding and Connecting Nodes
Right-click on empty space to open the node menu. Under "Add Node," you can find all available nodes by category. Drag from a node's output point and drop it on another node's input point; the connection is created automatically.
Common additional nodes:

- **LoRA Loader:** Loads LoRA models and adjusts their weight
- **ControlNet Apply:** Adds ControlNet control
- **Image Scale:** Resizes the image
- **Upscale Latent:** Upscales in latent space
- **Image Composite:** Combines multiple images
LoRA Usage
LoRA (Low-Rank Adaptation) models add extra styles or concepts to the base model. To use LoRA in ComfyUI:
1. Place the LoRA file (.safetensors) in the "models/loras" folder
2. Add a "Load LoRA" node to your workflow
3. Connect it between the Load Checkpoint node and the rest of the graph: its MODEL output feeds the KSampler and its CLIP output feeds the CLIP Text Encode nodes
4. Select the LoRA model file
5. Adjust the "strength_model" and "strength_clip" values (0.5-1.0 is a good starting range)
To use multiple LoRAs, connect Load LoRA nodes in series. Keeping each LoRA's weight low (0.4-0.7) reduces conflicts.
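In graph terms, chaining just means each Load LoRA node (class LoraLoader) consumes the previous stage's MODEL and CLIP outputs and emits modified ones. A minimal sketch with hypothetical node IDs and LoRA filenames:

```python
# Two LoraLoader nodes chained in series (hypothetical IDs and filenames).
# Links are [source_node_id, output_index]; a LoraLoader outputs
# MODEL (0) and CLIP (1), so the second loader reads from the first.
lora_chain = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_base.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "style_a.safetensors",
                     "strength_model": 0.6, "strength_clip": 0.6}},
    "3": {"class_type": "LoraLoader",
          "inputs": {"model": ["2", 0], "clip": ["2", 1],
                     "lora_name": "style_b.safetensors",
                     "strength_model": 0.5, "strength_clip": 0.5}},
    # Downstream, the KSampler would take its model from ["3", 0] and
    # the CLIP Text Encode nodes their clip from ["3", 1].
}
```

Note that both weights in this sketch sit in the 0.4-0.7 range suggested above, which keeps the two styles from fighting each other.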
Saving and Sharing Workflows
ComfyUI workflows are saved in JSON format. Every PNG you generate embeds the workflow as metadata, so you can restore the exact workflow by dragging the image back into ComfyUI.
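Because a workflow is plain JSON, you can archive, diff, and reload it with ordinary tools. A stdlib-only sketch (the filename and the tiny stand-in graph are hypothetical; a real export contains the full node graph):

```python
import json
import os
import tempfile

# A tiny stand-in for an exported workflow (real exports hold the
# whole node graph, not a single node).
workflow = {"1": {"class_type": "CheckpointLoaderSimple",
                  "inputs": {"ckpt_name": "model.safetensors"}}}

# Save it as JSON, the same format ComfyUI's "Save" button produces.
path = os.path.join(tempfile.mkdtemp(), "my_workflow.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(workflow, f, indent=2)

# Reload it later, or load a JSON file someone shared with you.
with open(path, encoding="utf-8") as f:
    restored = json.load(f)

assert restored == workflow  # round-trips losslessly
```

This is also why workflow-sharing sites work so well: a whole pipeline travels as one small text file.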
Workflow sharing platforms:

- **OpenArt.ai:** Hosts thousands of ready-made ComfyUI workflows
- **ComfyWorkflows.com:** Workflows shared by the community
- **CivitAI:** Example workflows on model pages
Custom Nodes
ComfyUI's power comes from custom nodes developed by the community. You can access thousands of custom nodes by installing the ComfyUI Manager extension:
- **WAS Node Suite:** Advanced image processing nodes
- **ComfyUI Impact Pack:** Detail enhancement and face fixing
- **Efficiency Nodes:** Batch processing and optimization
- **AnimateDiff:** Video generation workflows
To install the Manager, clone the ComfyUI-Manager repository into ComfyUI's custom_nodes folder and restart ComfyUI.
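Under the hood, a custom node is just a Python class following ComfyUI's node interface: an INPUT_TYPES classmethod describing the sockets, RETURN_TYPES, and a FUNCTION attribute naming the method to call. A minimal, hypothetical example (in ComfyUI an "IMAGE" input is a torch tensor; a plain float stands in here so the sketch runs standalone):

```python
# A minimal ComfyUI-style custom node. In practice this file would live
# under custom_nodes/ and "image" would be a torch tensor; the interface
# (INPUT_TYPES / RETURN_TYPES / FUNCTION / CATEGORY) is what ComfyUI reads.
class BrightnessScale:
    @classmethod
    def INPUT_TYPES(cls):
        # Declares the node's input sockets and widget settings.
        return {"required": {
            "image": ("IMAGE",),
            "factor": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 2.0}),
        }}

    RETURN_TYPES = ("IMAGE",)   # one output socket
    FUNCTION = "scale"          # method ComfyUI invokes
    CATEGORY = "image/adjust"   # where it appears in the Add Node menu

    def scale(self, image, factor):
        # Multiplying the tensor scales pixel brightness; outputs are
        # always returned as a tuple.
        return (image * factor,)


# ComfyUI discovers nodes through this mapping at startup.
NODE_CLASS_MAPPINGS = {"BrightnessScale": BrightnessScale}
```

Drop a file like this into custom_nodes/, restart ComfyUI, and the node shows up in the Add Node menu under its CATEGORY.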