ComfyUI – Installation & Usage Guide
This guide provides step-by-step instructions to install, configure, and use ComfyUI for AI image generation. ComfyUI is a node-based interface for Stable Diffusion, offering greater flexibility and control over the image generation process.
System Requirements
Recommended Hardware
- GPU: NVIDIA (8GB+ VRAM recommended), AMD (Linux), or Apple Silicon
- RAM: 16GB or more
- Disk Space: 20GB+ for installation and models
- Operating System: Windows 10/11, Linux, or macOS
Required Software
- Python 3.10–3.12 (3.12 recommended)
- Git (for repository-based installation)
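You can confirm both are installed by running the following in a terminal (use python3 instead of python on Linux/macOS):
python --version
git --version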
Installation
Windows
Method 1: Portable Install (Easiest)
- Download the portable version of ComfyUI from the releases page.
- Extract the downloaded archive with 7-Zip.
- Navigate to the extracted folder.
- Run run_nvidia_gpu.bat (for NVIDIA GPUs) or run_cpu.bat (without a GPU).
Method 2: Install via Git
1. Install Python 3.10–3.12 from the official website.
2. Install Git from the official website.
3. Open Command Prompt or PowerShell.
4. Clone the repository:
git clone https://github.com/comfyanonymous/ComfyUI.git
5. Go to the ComfyUI folder:
cd ComfyUI
6. Start ComfyUI:
python main.py
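A Git-based install does not bundle ComfyUI's Python dependencies (the portable build does), so the first start will fail until they are installed. For GPU acceleration, first install the PyTorch build that matches your GPU as described in the official ComfyUI README, then install the rest from the repository:
pip install -r requirements.txt
The same step applies to the Linux and macOS installs below (use pip3, or the pip inside your virtual environment if you create one).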
Linux
1. Install required dependencies:
sudo apt update
sudo apt install python3 python3-pip python3-venv git
2. Clone the repository:
git clone https://github.com/comfyanonymous/ComfyUI.git
3. Go to the ComfyUI folder:
cd ComfyUI
4. (Optional) Create a virtual environment:
python3 -m venv venv
source venv/bin/activate
5. Start ComfyUI:
python3 main.py
macOS
1. Install Homebrew (if you don’t have it yet):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
2. Install Python and Git:
brew install python git
3. Clone the repository:
git clone https://github.com/comfyanonymous/ComfyUI.git
4. Go to the ComfyUI folder:
cd ComfyUI
5. Start ComfyUI:
python3 main.py
Configuring a Startup Script
ComfyUI's HTTP API is enabled by default, so no extra flag is needed to switch it on. If your application runs in a browser or connects from another machine, however, it is convenient to start ComfyUI with the --listen, --port, and --enable-cors-header flags. The easiest way to apply them consistently is a small startup script (Windows) or shell script (Linux/macOS).
Windows
- Create a start_comfyui.bat file in the ComfyUI root folder with the following content:
@echo off
python main.py --listen 0.0.0.0 --port 8188 --enable-cors-header
(If you use the portable build, copy one of the bundled run_*.bat files instead and append the same flags to its command line.)
- Save the file and use it to start ComfyUI from now on.
Linux/macOS
- Create a start_comfyui.sh file in the ComfyUI root folder:
#!/bin/bash
python3 main.py --listen 0.0.0.0 --port 8188 --enable-cors-header
- Make the file executable:
chmod +x start_comfyui.sh
- Run the script:
./start_comfyui.sh
Downloading Models
ComfyUI does not ship with pre-installed models. You need to download models separately.
Quick Start - Essential Model Download
⚠️ REQUIRED: For image generation, you need at least one checkpoint model.
Recommended starter models (choose one; an example download command follows this list):
1. Stable Diffusion 1.5 (smaller, faster):
- Download: v1-5-pruned-emaonly.safetensors
- Size: ~4GB
- Place in: models/checkpoints/
2. SDXL Base (higher quality, requires more VRAM):
- Download any SDXL model from Civitai
- Size: ~6-7GB
- Place in: models/checkpoints/
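If you prefer the command line, a download can look like the following sketch; the URL here is only a placeholder, so copy the actual download link from the page of whichever model you chose:
curl -L -o models/checkpoints/v1-5-pruned-emaonly.safetensors "https://example.com/path/to/v1-5-pruned-emaonly.safetensors"
Run the command from the ComfyUI root folder so the file ends up in models/checkpoints/.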
Method 1: Manual Download
- Create the following folders in the ComfyUI root (if they don't already exist):
- models/checkpoints – For main models (SD 1.5, SDXL, etc.)
- models/vae – For VAE models
- models/controlnet – For ControlNet models
- models/loras – For LoRA models
- Download models from sites such as Civitai or Hugging Face.
- Place the downloaded files (.safetensors or .ckpt) into the corresponding folders.
Method 2: ComfyUI Model Manager
- Start ComfyUI.
- Open the web interface (usually http://127.0.0.1:8188).
- Click the menu button (three horizontal lines) in the top-right corner.
- Select Manager from the menu (the Manager is provided by the ComfyUI-Manager extension; if the option is missing, install ComfyUI-Manager first or use the manual method above).
- Use the manager to download models directly.
Running ComfyUI with the API Enabled
ComfyUI's API is served by the same process as the web interface and is available whenever ComfyUI is running. If your application runs in a browser or connects from another machine, start ComfyUI with the startup script you configured earlier so the --listen, --port, and CORS flags are applied.
Windows
- Run the start_comfyui.bat file you configured earlier.
- Wait until the terminal reports that the server has started and shows the URL http://127.0.0.1:8188.
Linux/macOS
- Run the start_comfyui.sh script you created earlier:
- ./start_comfyui.sh
- Wait until the terminal reports that the server has started and shows the URL http://127.0.0.1:8188.
Checking if the API is Working
- Open your browser and go to:
- http://127.0.0.1:8188/object_info
- You should see a JSON response with information about the available nodes in ComfyUI.
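The same check can be done from a terminal, which is useful when ComfyUI runs on a different machine (replace 127.0.0.1 with that machine's address):
curl -s http://127.0.0.1:8188/queue
curl -s http://127.0.0.1:8188/history
Both commands should print JSON (the /object_info response is much larger); a connection error means the server is not reachable on that address and port.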
Testing Basic Image Generation
- Open the ComfyUI Interface:
- Go to: http://127.0.0.1:8188
- You should see a node-based workflow interface
- Load Default Workflow (if needed):
- Click the "Load Default" button in the interface
- This creates a basic text-to-image generation workflow
- Test Image Generation:
- Find the "positive" text input node (usually labeled "CLIP Text Encode (Prompt)")
- Enter a simple prompt like: "a beautiful sunset over mountains"
- Click "Queue Prompt" button
- Wait for generation to complete (may take 30-60 seconds on first run)
- The generated image should appear in the interface
- Verify API Endpoints:
- Check queue status: http://127.0.0.1:8188/queue
- Check history: http://127.0.0.1:8188/history
- These should return JSON responses without errors
Integrating with Your Application
Once ComfyUI is running with the API enabled, your application can connect to it via http://127.0.0.1:8188.
Key Points
- ComfyUI must be running before your application tries to connect.
- The default port is 8188 (different from Forge/AUTOMATIC1111, which uses 7860).
- The API uses node-based JSON workflows.
- Your application should send requests to the /prompt endpoint.
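As a rough sketch of what a /prompt call looks like: export a workflow from the ComfyUI interface in API format (the exact menu entry depends on your ComfyUI version) and save it to a file; the name workflow_api.json below is purely illustrative. Then submit it wrapped in a "prompt" field:
# POST the exported node graph to the /prompt endpoint (shell example).
# The JSON response includes a prompt_id for the queued job.
curl -s -X POST http://127.0.0.1:8188/prompt \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": $(cat workflow_api.json)}"
The returned prompt_id can then be looked up in /history to see when the job finished and which output images it produced; the same images are also written to the output folder on disk.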
Common Troubleshooting
ComfyUI doesn't start
- Ensure Python is installed correctly.
- Try running pip install -r requirements.txt in the ComfyUI folder.
- Check for errors in the console.
Error "CUDA out of memory"
- Reduce the image size (e.g., 512×512 instead of 1024×1024).
- Close other applications using the GPU.
- Add --lowvram to the startup flags.
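For example, with a Git-based install you can start ComfyUI as follows (append the flag to your startup script in the same way):
python main.py --lowvram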
API not responding
- Ensure ComfyUI has finished starting and is listening on the expected address and port (check the flags in your startup script).
- Confirm that port 8188 is not blocked by a firewall.
- Verify that ComfyUI is running and hasn't crashed.
Models don't appear
- Check that models are in the correct folders.
- Restart ComfyUI after adding new models.
- Verify that file formats are supported (.safetensors, .ckpt).
Integration-Specific Issues
Error: "No images found in ComfyUI output"
- Ensure at least one checkpoint model is installed in models/checkpoints/
- Verify the workflow completed successfully in the ComfyUI interface
- Check that the model is compatible with your hardware (VRAM requirements)
- Restart ComfyUI after installing new models
Error: "Maximum call stack size exceeded" or "FileReader is not defined"
- This indicates an integration issue with the calling application
- Ensure your application uses proper Node.js Buffer handling for image conversion
- The application should use Buffer.from() instead of browser-specific APIs
Generation takes too long or times out
- First image generation is slower due to model loading (can take 2-3 minutes)
- Subsequent generations should be faster (30-60 seconds)
- Check ComfyUI console for progress indicators
- For SDXL models, ensure you have 8GB+ VRAM
Workflow errors in ComfyUI interface
- Red nodes indicate errors in the workflow
- Common causes: Missing models, incompatible settings, insufficient VRAM
- Check the console output for specific error messages
- Try loading the default workflow: Menu → Load Default
Pre-Integration Checklist
Before integrating ComfyUI with your application, ensure:
- ComfyUI starts without errors
- At least one checkpoint model is installed in models/checkpoints/
- API endpoints respond correctly (/object_info, /queue, /history)
- Basic image generation works in the ComfyUI interface
- Your startup command includes the flags your setup needs (--listen and --port, plus --enable-cors-header for browser-based clients)
- Port 8188 is accessible and not blocked by firewall
Important Notes
- ComfyUI runs locally on the user's machine, not on a remote server.
- Performance depends on the user's hardware, especially the GPU.
- Large models (like SDXL) require more VRAM (8GB+ recommended).
- The first run may be slower due to initial model loading.
- At least one checkpoint model is required for image generation to work.
- The default workflow uses a simple text-to-image setup that works with most models.
- Generated images are saved in the output folder inside the ComfyUI directory.
Additional Resources
This guide was created to help users set up and use ComfyUI for AI image generation. If you encounter issues or have questions, consult the official documentation or community resources.