ComfyUI with SDXL (Base+Refiner) + ControlNet XL OpenPose + FaceDefiner (2x). ComfyUI is hard. Andy Lau's face doesn't need any fixing (or does it?). Run steps 0-10 on the base SDXL model and steps 10-20 on the SDXL refiner. This SDXL ComfyUI workflow has many versions, including LoRA support, face fix, and more. Usually, on the first run (just after the model is loaded) the refiner takes longer. In the two-model setup that SDXL uses, the base model is good at generating original images from 100% noise, and the refiner is good at adding detail to a mostly denoised image. SDXL-refiner-1.0 is an improved version over SDXL-refiner-0.9. The second KSampler must not add noise. I don't want it to get to the point where people are just making models that are designed around looking good at displaying faces. I'm probably messing something up (I'm still new to this), but you put the model and CLIP outputs of the checkpoint loader into the corresponding sampler and text-encode nodes. ComfyUI and SDXL. It will output this resolution to the bus. Base checkpoint: sd_xl_base_1.0_0.9vae; refiner checkpoint: sd_xl_refiner_1.0. In this ComfyUI tutorial we'll install ComfyUI and show you how it works. Place VAEs in the folder ComfyUI/models/vae. That's because the creator of this workflow has the same 4GB of VRAM. Txt2img is achieved by passing an empty latent image to the sampler node with maximum denoise. This episode opens a new topic: another way to drive Stable Diffusion, the node-based ComfyUI. Regular viewers of the channel will know I've always used the WebUI for demos and walkthroughs. Once wired up, you can enter your wildcard text. Per the announcement, SDXL 1.0 has been released. Part 4 (this post): we will install custom nodes and build out workflows. ComfyUI is a nodes/graph/flowchart interface to experiment with and create complex Stable Diffusion workflows without needing to code anything. Set denoise to 0.75 before the refiner KSampler. A hub dedicated to development and upkeep of the Sytan SDXL workflow for ComfyUI; the workflow is provided as a .json file. The CLIP Text Encode SDXL (Advanced) node provides the same settings as its non-SDXL version.
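The 0-10 / 10-20 split above can be expressed as a tiny helper. This is an illustrative sketch, not ComfyUI code — the function name and the fraction argument are my own, and real workflows set these ranges on the KSampler nodes directly:

```python
def split_steps(total_steps, base_fraction):
    """Split a sampling schedule between the SDXL base and refiner.

    Returns (base_range, refiner_range) as (start, end) step tuples,
    e.g. steps 0-10 on the base and 10-20 on the refiner.
    """
    handoff = round(total_steps * base_fraction)
    return (0, handoff), (handoff, total_steps)

base, refiner = split_steps(20, 0.5)
print(base, refiner)  # (0, 10) (10, 20)
```

The same helper with `base_fraction=0.8` reproduces the "base stops at around 80%" rule mentioned later in these notes.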
Using the SDXL Refiner in AUTOMATIC1111. I want a ComfyUI workflow that's compatible with SDXL, with the base model, refiner model, hires fix, and one LoRA all in one go. Best settings for Stable Diffusion XL 0.9. Hires fix is just creating an image at a lower resolution, upscaling it, and then sending it through img2img. ComfyUI provides a super convenient UI and smart features like saving workflow metadata in the resulting PNG images. Does it mean 8GB VRAM is too little in A1111? Is anybody able to run SDXL on an 8GB VRAM GPU in A1111? Note that in ComfyUI, txt2img and img2img are the same node. Lecture 18: how to use Stable Diffusion, SDXL, ControlNet, and LoRAs for free without a GPU on Kaggle, like Google Colab. There are settings and scenarios that take masses of manual clicking. Put them into ComfyUI/models/vae/SDXL and ComfyUI/models/vae/SD15. The idea is that you are using the model at the resolution it was trained at. SDXL includes a refiner model specialized in denoising low-noise-stage images, to generate higher-quality images from the base model. The workflow should generate images first with the base and then pass them to the refiner for further refinement. Launch as usual and wait for it to install updates. It didn't work out. Is there an explanation for how to use the refiner in ComfyUI? You can just use someone else's 0.9 workflow. Click "Manager" in ComfyUI, then "Install missing custom nodes". You can run it on Google Colab. I think this is the best balance I could find: sdxl_v0.9_comfyui_colab (1024x1024 model), please use with refiner_v0.9.safetensors. SDXL Models 1.0.
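The hires-fix recipe above (generate low-res, upscale, run through img2img) needs the upscaled target to sit on the latent grid. A minimal sketch, assuming only that dimensions should stay multiples of 8 — the function name is my own:

```python
def hires_dims(width, height, scale, multiple=8):
    """Upscale target for a hires-fix pass, snapped down to a multiple
    of 8 so the result maps cleanly onto the latent grid."""
    snap = lambda v: int(v * scale) // multiple * multiple
    return snap(width), snap(height)

print(hires_dims(832, 1216, 1.5))   # (1248, 1824)
print(hires_dims(1024, 720, 1.5))   # (1536, 1080)
```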
To simplify the workflow, set up a base generation and a refiner refinement using two Checkpoint Loaders. I wanted to share my configuration for ComfyUI, since many of us are using our laptops most of the time. Upscale the result. Play around with different samplers and different amounts of base steps (30, 60, 90, maybe even higher). After gathering some more knowledge about SDXL and ComfyUI, and experimenting a few days with both, I've ended up with this basic (no upscaling) 2-stage (base + refiner) workflow: it works pretty well for me — I change dimensions, prompts, and sampler parameters, but the flow itself stays as it is. How to use Stable Diffusion XL 1.0. UPD: updated version — how to use SDXL 0.9 with a Colab notebook ⚡. What I am trying to say is: do you have enough system RAM? Custom node pack for ComfyUI: this custom node pack helps to conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more. SDXL two-staged denoising workflow. Part 3: we added the refiner for the full SDXL process. Hires isn't a refiner stage. Yes, only the refiner has the aesthetic-score conditioning. There are several options for how you can use the SDXL model, including how to install SDXL 1.0. In this series, we will start from scratch — an empty canvas of ComfyUI — and, step by step, build up SDXL workflows. But these improvements do come at a cost; SDXL 1.0 is more demanding. ComfyUI Master Tutorial — Stable Diffusion XL (SDXL) — install on PC, Google Colab (free) & RunPod. SDXL-ComfyUI-Colab: one-click-setup ComfyUI Colab notebook for running SDXL (base + refiner). Automatic1111 has been tested and verified to be working amazingly with it. Install SDXL (directory: models/checkpoints) and install a custom SD 1.5 checkpoint if you want one. It provides a workflow for SDXL (base + refiner). Install this, restart ComfyUI and click "Manager", then "Install missing custom nodes"; restart again and it should work. SDXL VAE. It fully supports the latest models.
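The two-Checkpoint-Loader idea is easiest to see as plain data: nodes plus links. A sketch under assumed, simplified node names — real ComfyUI workflow JSON uses a richer schema, but the shape (nodes referenced by links) is the same:

```python
workflow = {
    "nodes": {
        1: {"type": "CheckpointLoader", "ckpt": "sd_xl_base_1.0.safetensors"},
        2: {"type": "CheckpointLoader", "ckpt": "sd_xl_refiner_1.0.safetensors"},
        3: {"type": "KSamplerAdvanced", "role": "base"},
        4: {"type": "KSamplerAdvanced", "role": "refiner"},
    },
    # links as (from_node, output_name, to_node, input_name)
    "links": [
        (1, "MODEL", 3, "model"),
        (2, "MODEL", 4, "model"),
        (3, "LATENT", 4, "latent_image"),  # base hands its latent to the refiner
    ],
}

def validate(graph):
    """Every link endpoint must reference a declared node."""
    ids = graph["nodes"].keys()
    return all(src in ids and dst in ids for src, _, dst, _ in graph["links"])

print(validate(workflow))  # True
```

A broken graph (a link to a node that was deleted) fails this check — which is roughly what ComfyUI's own loader complains about when a saved workflow references missing nodes.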
Note that for Invoke AI this step may not be required, as it's supposed to do the whole process in a single image generation. A detailed look at a stable SDXL ComfyUI workflow — the internal AI-art tool I use at Stability: next, we need to load our SDXL base model (recolor the node if you like). Once our base model is loaded, we also need to load a refiner, but we will deal with that later — no rush. In addition, we need to do some processing on the CLIP output from SDXL. The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9. Two samplers (base and refiner), and two Save Image nodes (one for the base and one for the refiner). ComfyUI supports SD 1.x and 2.x and offers many optimizations, such as re-executing only the parts of the workflow that change between executions. ComfyUI is a powerful and modular GUI for Stable Diffusion, allowing users to create advanced workflows using a node/graph interface. This is the best balance I could find between image size (1024x720), models, steps (10 base + 5 refiner), and samplers/schedulers, so we can use SDXL on our laptops without those expensive/bulky desktop GPUs. This workflow is meticulously fine-tuned to accommodate LoRA and ControlNet inputs, and demonstrates interactions with embeddings as well. SDXL 1.0 with both the base and refiner checkpoints. Just training the base model isn't feasible for accurately generating images of subjects such as people, animals, etc. FromDetailer (SDXL/pipe), BasicPipe -> DetailerPipe (SDXL), Edit DetailerPipe (SDXL): these are pipe functions used in Detailer for utilizing the refiner model of SDXL. SDXL 0.9 in ComfyUI (I would prefer to use A1111): I'm running an RTX 2060 6GB VRAM laptop and it takes about 6-8 minutes for a 1080x1080 image with 20 base steps and 15 refiner steps. Edit: I'm using Olivio's first setup (no upscaler). Edit: after the first run I get a 1080x1080 image (including the refining); prompt executed in 240 seconds. NOTE: You will need to use the linear (AnimateDiff-SDXL) beta_schedule. In this post, I will describe the base installation and all the optional assets I use. The SDXL 1.0 mixture-of-experts pipeline (base + SDXL-refiner-1.0) includes both a base model and a refinement model.
You can use the base model by itself, but for additional detail you should move to the refiner. Use the SDXL refiner with old models. Having previously covered how to use SDXL with StableDiffusionWebUI and ComfyUI, let's now explore SDXL 1.0. Make the following changes: in the Stable Diffusion checkpoint dropdown, select the refiner sd_xl_refiner_1.0. That should stop it being distorted; you can also switch the upscale method to bilinear, as that may work a bit better. SEGSPaste pastes the results of SEGS onto the original. Say you want to generate an image in 30 steps: I also automated the split of the diffusion steps between the base and the refiner. The speed of image generation is about 10 s/it (1024x1024, batch size 1); the refiner works faster, up to 1+ s/it, when refining at the same 1024x1024 resolution. It is not AnimateDiff but a different structure entirely; however, Kosinkadink, who makes the AnimateDiff ComfyUI nodes, got it working, and I worked with one of the creators to figure out the right settings to get it to give good outputs. In researching inpainting using SDXL 1.0: use the SDXL refiner as img2img and feed it your pictures. Go to img2img, choose batch, select the refiner from the dropdown, use the folder in 1 as input and the folder in 2 as output. Fooocus-MRE v2.x. Hires fix will act as a refiner that will still use the LoRA. Nevertheless, its default settings are comparable. If you look for the missing model you need and download it from there, it'll automatically be put in place. Given the imminent release of SDXL 1.0, at least 8GB VRAM is recommended. Yesterday I woke up to the Reddit post "Happy Reddit Leak Day" by Joe Penna. I'll use the provided .json file. ComfyUI is a node-based, powerful and modular Stable Diffusion GUI and backend. A couple of notes about using SDXL with A1111. Thanks to SDXL, it's not the usual ultra-complicated v1.x workflow.
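When the refiner is used as an img2img pass over your pictures, the denoise value decides how much of the schedule actually runs. A sketch of that relationship — the function name and the example denoise values are illustrative, not prescribed by the source:

```python
def effective_steps(steps, denoise):
    """Approximate number of steps an img2img pass actually executes:
    denoise scales how far back into the noise schedule we start."""
    return max(1, round(steps * denoise))

# a light 0.25-denoise refiner pass over a 20-step schedule
# only runs about 5 steps, which is why it is fast
print(effective_steps(20, 0.25))  # 5
```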
Settings: checkpoint with 0.9 VAE; image size: 1344x768 px; sampler: DPM++ 2S Ancestral; scheduler: Karras; steps: 70; CFG scale: 10; aesthetic score: 6. A config file for ComfyUI to test SDXL 0.9. SDXL base → SDXL refiner → HiResFix/img2img (using Juggernaut as the model). I also used a latent upscale stage with a 1.5x factor. SD 1.5 works with 4GB even on A1111, so either you don't know how to work with ComfyUI or you have not tried it at all. SDXL 1.0 Base + LoRA + Refiner workflow. SDXL Refiner: the refiner model, a new feature of SDXL. SDXL VAE: optional, as there is a VAE baked into the base and refiner models, but it's nice to have it separate in the workflow so it can be updated/changed without needing a new model. In this quick episode we do a simple workflow where we upload an image into our SDXL graph inside ComfyUI and add additional noise to produce an altered image. Yes, there would need to be separate LoRAs trained for the base and refiner models. Add the SDXL 1.0 base and refiner models to the ComfyUI folder. Tutorial video: ComfyUI Master Tutorial — Stable Diffusion XL (SDXL) — install on PC, Google Colab. ComfyUI Manager: a plugin for ComfyUI that helps detect and install missing plugins. Traditionally, working with SDXL required the use of two separate KSamplers — one for the base model and another for the refiner model. What I have done is recreate the parts for one specific area. Just use SDXL base to run a 10-step DDIM KSampler, then convert to an image and run it through an SD 1.5 model. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. That is not the ideal way to run it, though. An example workflow can be loaded by downloading the image and drag-dropping it onto the ComfyUI home page. Yet another week, and new tools have come out, so one must play and experiment with them.
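Why does 1344x768 work as an SDXL size? SDXL is trained near a ~1-megapixel budget, so non-square sizes should keep roughly the same pixel count and stay on the latent grid. A sketch of that sanity check — the tolerance value is my own assumption:

```python
def sdxl_resolution_ok(width, height, target_mp=1024 * 1024, tolerance=0.1):
    """SDXL is trained near ~1 megapixel; check that a resolution stays
    within tolerance of that budget and sits on the 8-pixel latent grid."""
    on_grid = width % 8 == 0 and height % 8 == 0
    mp_ratio = (width * height) / target_mp
    return on_grid and abs(mp_ratio - 1.0) <= tolerance

print(sdxl_resolution_ok(1344, 768))  # True: 1344*768 is ~0.98 of the budget
print(sdxl_resolution_ok(512, 512))   # False: far below the budget
```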
The 0.9 workflow (the one from Olivio Sarikas's video) works just fine; just replace the models with the 1.0 versions. SD 1.5 + SDXL Refiner workflow (r/StableDiffusion). Continuing with the car analogy, learning ComfyUI is a bit like learning to drive with a manual shift. With SDXL I often have the most accurate results with ancestral samplers. Please do not use the refiner as an img2img pass on top of the base. This tool is very powerful. In ComfyUI, load a model and click "Queue Prompt". The refiner goes in the same folder as the base model, although with the refiner I can't go higher than 1024x1024 in img2img. SDXL 1.0 is out (26 July 2023)! Time to test it out using a no-code GUI called ComfyUI! Natural-language prompts. At that time I was only half aware of the first one you mentioned. Use sd_xl_base_0.9 and sd_xl_refiner_0.9 to get images consistent with the official approach (to the best of our knowledge). Ultimate SD Upscaling. Stability AI has released Stable Diffusion XL (SDXL) 1.0. A summary of how to run SDXL in ComfyUI. You must have both the SDXL base and the SDXL refiner. RunDiffusion. Final version 3. But actually, I didn't hear anything about the training of the refiner. Step 1: install ComfyUI. Download the SDXL models. A CLIPTextEncodeSDXLRefiner and a CLIPTextEncode for the refiner_positive and refiner_negative prompts, respectively. Feel free to modify it further if you know how. Compatible with StableSwarmUI (developed by Stability AI; uses ComfyUI as the backend, but in an early alpha stage). It's a LoRA for noise offset, not quite contrast. The workflow is provided as a .json file which is easily loadable into the ComfyUI environment.
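The CLIPTextEncodeSDXLRefiner node mentioned above differs from the base encoder in one key way: it carries an aesthetic score. A sketch of the two conditioning payloads it produces — the dict layout and the default scores (6.0 positive / 2.5 negative, as seen in common example workflows) are assumptions, not the node's real internal format:

```python
def refiner_conditioning(prompt, negative_prompt, pos_ascore=6.0, neg_ascore=2.5):
    """Build the refiner's two conditioning payloads. Only the refiner
    (not the base model) takes an aesthetic score: high for the
    positive prompt, low for the negative one."""
    return (
        {"text": prompt, "aesthetic_score": pos_ascore},
        {"text": negative_prompt, "aesthetic_score": neg_ascore},
    )

pos, neg = refiner_conditioning("a castle at dawn", "blurry, low quality")
print(pos["aesthetic_score"], neg["aesthetic_score"])  # 6.0 2.5
```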
SDXL 1.0 is "built on an innovative new architecture" composed of a 3.5B-parameter base model and a 6.6B-parameter refiner. ComfyUI is recommended by Stability AI — a highly customizable UI with custom workflows. Now that you have been lured into the trap by the synthography on the cover, welcome to my alchemy workshop! Base+refiner model usage. An SDXL base model goes in the upper Load Checkpoint node. I can tell you that ComfyUI renders 1024x1024 in SDXL at faster speeds than A1111 does with hires fix 2x (for SD 1.5). stable-diffusion-xl-0.9-usage: this repo is a tutorial intended to help beginners use the newly released model, Stable Diffusion XL 0.9. Updated with: 1.1 Workflow "Complejo" for base+refiner and upscaling; 1.2 Workflow "Simple" — easy to use, with 4K upscaling. You can use SD.Next and set the diffusers backend to use sequential CPU offloading; it loads the part of the model it is using while it generates the image, so you only end up using around 1-2GB of VRAM. There are two ways to use the refiner: use the base and refiner models together to produce a refined image, or use the refiner as an img2img pass. Got playing with SDXL and wow! It's as good as they say. (I am unable to upload the full-sized image.) If you have the SDXL 1.0 models: the upscale model needs to be downloaded into ComfyUI/models/upscale_models; the recommended one is 4x-UltraSharp, download from here. This is great; now all we need is an equivalent for when one wants to switch to another model with no refiner. The refiner is only good at refining the noise still left over from the image's creation, and will give you a blurry result if you try to use it for more than that. 0.9 VAE; LoRAs. Supports SDXL and SDXL Refiner. I hope someone finds it useful. Part 3: we will add an SDXL refiner for the full SDXL process. SDXL Offset Noise LoRA; upscaler.
Here's what I've found: when I pair the SDXL base with my LoRA in ComfyUI, things seem to click and work pretty well. SD 1.5 + SDXL base already shows good results. The field of artificial intelligence has witnessed remarkable advancements in recent years, and one area that continues to impress is text-to-image generation. The refiner is trained specifically to do the last 20% of the timesteps, so the idea was to not waste time running it for longer. SDXL 1.0 Refiner & the other SDXL fp16 baked VAE. I created this ComfyUI workflow to use the new SDXL refiner with old models: basically it just creates a 512x512 image as usual, then upscales it, then feeds it to the refiner. SDXL_LoRA_InPAINT | SDXL_With_LoRA | SDXL_Inpaint | SDXL_Refiner_Inpaint. I tried Fooocus yesterday and I was getting 42+ seconds for a "quick" generation (30 steps). I'm not trying to mix models (yet), apart from sd_xl_base and sd_xl_refiner latents. SDXL places very heavy emphasis on the beginning of the prompt, so put your main keywords first. 0.9 VAE; LoRAs. An SDXL refiner model goes in the lower Load Checkpoint node. (Pass the latent to avoid this.) Do the opposite: disable the nodes for the base model and enable the refiner-model nodes. I've successfully downloaded the 2 main files. As the comparison below shows, the images generated by the refiner model beat the base model's output in quality and detail capture — comparisons can be cruel! 🧨 Diffusers examples. Hi there. How to use inpainting with SDXL in ComfyUI. 1-click auto-installer script for ComfyUI (latest) & Manager on RunPod. sdxl_v1.0_controlnet_comfyui_colab (1024x1024 model); controlnet_v1.1. Regenerate faces. Place upscalers in the folder ComfyUI/models/upscale_models. SDXL 1.0 involves an impressive 3.5B-parameter base model. 3) Not at the moment, I believe. Other than that, restart ComfyUI. A detailed description can be found on the project repository site (GitHub link). I'm creating some cool images with some SD 1.5 models — the SD 1.5 base model vs. later iterations.
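The "old models + new refiner" workflow above (512x512 generation, upscale, then refine) can be written down as a stage plan. A sketch: the function name, the 2x scale, and the 0.3 refine denoise are illustrative assumptions, not values from the original workflow:

```python
def old_model_refiner_plan(base_size=512, scale=2, refine_denoise=0.3):
    """Stage plan for the 'SD 1.5 base, SDXL refiner' trick: generate
    small, upscale, then run a light refiner pass over the result."""
    upscaled = base_size * scale
    return [
        ("generate", base_size, base_size),            # old model, full denoise
        ("upscale", upscaled, upscaled),               # pixel or latent upscale
        ("refine", upscaled, upscaled, refine_denoise) # refiner as img2img
    ]

for stage in old_model_refiner_plan():
    print(stage)
```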
Contribute to markemicek/ComfyUI-SDXL-Workflow development by creating an account on GitHub. Be patient, as the initial run may take a bit of time. SDXL 1.0 base checkpoint; SDXL 1.0 refiner checkpoint; VRAM settings. Together, we will build up knowledge. VAE selector (needs a VAE file; download the SDXL bf16 VAE from here, plus a VAE file for SD 1.5). When all you need to use this is files full of encoded text, it's easy to leak. The base SDXL model will stop at around 80% of completion. Up to 70% speedup. Control-LoRA: an official release of ControlNet-style models, along with a few other interesting ones. SDXL 1.0 workflow. But the CLIP refiner is built in for retouches, which I didn't need since I was too flabbergasted with the results SDXL 0.9 gave me. Installing. In this ComfyUI tutorial we will quickly cover it. I also used the refiner model for all the tests, even though some SDXL models don't require a refiner. Second, if you are planning to run the SDXL refiner as well, make sure you install this extension. It will only make bad hands worse. About the different versions: Original SDXL works as intended — correct CLIP modules with different prompt boxes. In ComfyUI this can be accomplished with the output of one KSampler node (using SDXL base) leading directly into the input of another KSampler node (using the refiner). WAS Node Suite. Set up a quick workflow to do the first part of the denoising process on the base model, but instead of finishing it, stop early and pass the noisy result on to the refiner to finish the process. Having issues with the refiner in ComfyUI? To use the refiner model, navigate to the image-to-image tab within AUTOMATIC1111. A 6.6B-parameter refiner. It detects hands and improves what is already there. Download this workflow's JSON file and load it into ComfyUI to start your SDXL image-generation journey.
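The stop-early-and-hand-off pattern above maps onto two KSampler (Advanced) nodes. A sketch of the settings and the invariant that makes the handoff work — the dict keys mirror that node's widget names as I recall them (add_noise, start_at_step, end_at_step, return_with_leftover_noise), so treat the exact names as an assumption:

```python
TOTAL_STEPS, HANDOFF = 25, 20  # base stops at ~80% of completion

base_sampler = {
    "add_noise": "enable",                   # base starts from pure noise
    "steps": TOTAL_STEPS,
    "start_at_step": 0,
    "end_at_step": HANDOFF,
    "return_with_leftover_noise": "enable",  # pass the noisy latent onward
}
refiner_sampler = {
    "add_noise": "disable",                  # second KSampler must NOT add noise
    "steps": TOTAL_STEPS,
    "start_at_step": HANDOFF,
    "end_at_step": TOTAL_STEPS,
    "return_with_leftover_noise": "disable",
}

def handoff_ok(base, refiner):
    """The refiner must resume exactly where the base stopped,
    without re-noising the latent it receives."""
    return (base["end_at_step"] == refiner["start_at_step"]
            and base["return_with_leftover_noise"] == "enable"
            and refiner["add_noise"] == "disable")

print(handoff_ok(base_sampler, refiner_sampler))  # True
```

Breaking any one of those three conditions is the usual cause of washed-out or over-noised two-stage results.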
Here are the configuration settings for the SDXL models test. I've been having a blast experimenting with SDXL lately. Updated ComfyUI workflow: SDXL (base+refiner) + XY Plot + Control-LoRAs + ReVision + ControlNet XL OpenPose + upscaler. With SDXL 1.0 out, I started to get curious and followed guides using ComfyUI and SDXL 0.9. I have SD 1.5 models in ComfyUI, but they're 512x768 and as such too small a resolution for my uses. Using ComfyUI plugins. So if ComfyUI / A1111 sd-webui can't read the image metadata, open the last image in a text editor to read the details. Testing was done with 1/5 of the total steps being used in the upscaling. A (simple) function to print the details in the terminal. The workflow uses an SD 1.5 refined model and a switchable face detailer. Second picture is base SDXL, then SDXL + refiner at 5 steps, then 10 steps, and 20 steps. Download the .json and add it to the ComfyUI/web folder. My research organization received access to SDXL. They compare the results of the Automatic1111 web UI and ComfyUI for SDXL, highlighting the benefits of the former. Therefore, it generates thumbnails by decoding them using the SD 1.5 method. I tried using the defaults. On the ComfyUI GitHub, find the SDXL examples and download the image(s). NOTICE: All experimental/temporary nodes are in blue. I trained a LoRA model of myself using the SDXL 1.0 base. Here are some examples I generated using ComfyUI + SDXL 1.0. If you're short on VRAM and swapping in the refiner too, use the --medvram-sdxl flag when starting A1111. SDXL Prompt Styler Advanced: a new node for more elaborate workflows with linguistic and supportive terms. Part 2 (link): we added an SDXL-specific conditioning implementation and tested the impact of conditioning parameters on the generated images. Step 2: install or update ControlNet. A 3.5B-parameter base model and a 6.6B-parameter refiner. I have SD 1.5 models, and I don't get good results with the upscalers either when using SD 1.5.
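About reading that image metadata yourself: ComfyUI stores its workflow JSON in PNG text chunks, which is why a plain text editor can reveal it. A stdlib-only sketch that pulls tEXt chunks out of PNG bytes, demonstrated on a tiny synthetic PNG (the chunk keyword "prompt" matches what ComfyUI commonly uses, but treat that as an assumption):

```python
import struct, zlib

def png_text_chunks(data: bytes) -> dict:
    """Extract tEXt chunks (keyword -> value) from PNG bytes."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4-byte length + 4-byte type + data + 4-byte CRC
        if ctype == b"IEND":
            break
    return out

def _chunk(ctype: bytes, body: bytes) -> bytes:
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# tiny synthetic PNG carrying one tEXt chunk, for demonstration only
demo = (b"\x89PNG\r\n\x1a\n"
        + _chunk(b"IHDR", b"\x00" * 13)
        + _chunk(b"tEXt", b'prompt\x00{"steps": 20}')
        + _chunk(b"IEND", b""))
print(png_text_chunks(demo))  # {'prompt': '{"steps": 20}'}
```

With a real ComfyUI output you would pass `open("image.png", "rb").read()` instead of `demo`.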
He linked to this post where we have SDXL base + SD 1.5. Sample workflow for ComfyUI below — picking up pixels from SD 1.5 and refining them. Misconfiguring nodes can lead to erroneous conclusions, and it's essential to understand the correct settings for a fair assessment. SDXL Refiner 1.0. Download the SDXL VAE encoder. The question is: how can this style be specified when using ComfyUI? This is an answer that someone may want to correct. Below the image, click on "Send to img2img". To use the refiner in AUTOMATIC1111, you'll need to activate the SDXL Refiner extension. Use the "Load" button on the menu. This checkpoint recommends a VAE; download it and place it in the VAE folder. It is highly recommended to use a 2x upscaler in the refiner stage, as 4x will slow the refiner to a crawl on most systems, for no significant benefit (in my opinion). Settings: width: 896; height: 1152; CFG scale: 7; steps: 30; sampler: DPM++ 2M Karras; prompt: as above. I wanted to see the difference with those, along with the refiner pipeline added. v1.1: support for fine-tuned SDXL models that don't require the refiner. Part 4: we intend to add ControlNets, upscaling, LoRAs, and other custom additions. An SD 1.5 model, and the SDXL refiner model. SDXL 1.0 can generate 1024x1024-pixel images by default. Compared with existing models, it improves the handling of light sources and shadows, and it can also successfully generate things image-generation AI tends to struggle with, such as hands, text within images, and compositions with three-dimensional depth. Refiners should have at most half the steps that the generation has. If you want it for a specific workflow, you can copy it from the prompt section of the image metadata of images generated with ComfyUI. Keep in mind ComfyUI is pre-alpha software, so this format will change a bit. To use the refiner, which seems to be one of SDXL's defining features, you need to build a flow that uses it.
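The half-steps rule of thumb above can be sketched as a tiny budget helper — the function name and the ratio parameter are my own:

```python
def refiner_step_budget(generation_steps, ratio=0.5):
    """Cap refiner steps at a fraction of the generation's steps;
    the rule of thumb quoted above is at most half."""
    return max(1, int(generation_steps * ratio))

print(refiner_step_budget(30))        # 15
print(refiner_step_budget(20, 0.25))  # 5
```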
I'm not having success working with a multi-LoRA loader within a workflow that involves the refiner, because the multi-LoRA loaders I've tried are not suitable for SDXL checkpoint loaders, AFAIK. 🚀 The LCM update brings SDXL and SSD-1B to the game 🎮. Example prompt: "photo of a male warrior, modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, medieval armor, professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail". Use SDXL 1.0 with both the base and refiner checkpoints. The generation times quoted are for a total batch of 4 images at 1024x1024. Run SDXL 0.9 in ComfyUI with both the base and refiner models together to achieve a magnificent quality of image generation. Navigate to your installation folder. With ComfyUI it took 12 s and 1 min 30 s respectively, without any optimization. For my SDXL model comparison test, I used the same configuration with the same prompts. You want to use Stable Diffusion and image-generative AI models for free, but you can't pay for online services or you don't have a strong computer. Installing ControlNet for Stable Diffusion XL on Google Colab. The refiner is only good at refining the noise still left over from the image's creation, and will give you a blurry result if you try to use it for more than that.