ComfyUI load workflow tutorial (Reddit roundup)

The ComfyUI workflow uses the latent upscaler (nearest-exact) set to 512x912, multiplied by 2, and it takes around 120-140 seconds per image at 30 steps with SDXL 0.9. The diagram doesn't load into ComfyUI, so I can't test it out. I have a video and I want to run SD on each frame of that video.

You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them! Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates. You can then load or drag the following image in ComfyUI to get the workflow.

This guide is about how to set up ComfyUI on your Windows computer to run Flux.

Dec 1, 2023: If you've ever wanted to start creating your own Stable Diffusion workflows in ComfyUI, then this is the video for you! Learning the basics is essential for any workflow creator.

Apr 30, 2024: Follow this step-by-step guide to load, configure, and test LoRAs in ComfyUI, and unlock new creative possibilities for your projects.

Try to install the ReActor node directly via ComfyUI Manager: go to the Manager, click "Install Custom Nodes", and search for "reactor". Once installed, download the required files and add them to the appropriate folders. Download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Welcome to the unofficial ComfyUI subreddit. Please keep posted images SFW. That's a bit presumptuous, considering you don't know my requirements. And above all, BE NICE.

Start with a basic SD1.5 workflow (don't download workflows from YouTube videos or advanced stuff on here!!). I then downloaded a custom workflow from here and initiated installing it from within ComfyUI. Ending workflow.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. Then add in the parts for a LoRA, a ControlNet, and an IPAdapter.
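ComfyUI can rebuild a whole graph from a dropped image because it embeds the workflow JSON in the PNG's metadata (tEXt chunks named "workflow" and "prompt"). A minimal stdlib sketch of reading those chunks — the one-chunk PNG built here is synthetic, standing in for a real render:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, body: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

def read_text_chunks(png: bytes) -> dict:
    """Collect tEXt chunks (keyword\\0value) from a PNG byte string.
    ComfyUI stores the editable graph under 'workflow' and the
    API-format prompt under 'prompt'."""
    assert png[:8] == PNG_SIG, "not a PNG file"
    out, pos = {}, 8
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        body = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length
    return out

# Synthetic one-chunk PNG standing in for a real ComfyUI render:
demo = (PNG_SIG
        + _chunk(b"tEXt", b"workflow\x00{\"nodes\": []}")
        + _chunk(b"IEND", b""))
print(read_text_chunks(demo))  # {'workflow': '{"nodes": []}'}
```

This also explains why some shared images "don't load anything": if a site re-encodes or strips the PNG metadata, the workflow chunk is gone.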
Keyboard shortcuts:

Ctrl + S: Save workflow
Ctrl + O: Load workflow
Ctrl + A: Select all nodes
Alt + C: Collapse/uncollapse selected nodes
Ctrl + M: Mute/unmute selected nodes
Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
Delete/Backspace: Delete selected nodes
Ctrl + Backspace: Delete the current graph

I see YouTubers drag images into ComfyUI and get a full workflow, but when I do it, I can't seem to load any workflows. At the same time, I scratch my head over which HF models to download and where to place the 4 Stage models. There are lots of people who want to turn their workflows into fully functioning apps, and libraries like yours will help that a lot. So the first time you start the workflow, wait a while. I'm wondering if there is a good tutorial out there that starts at step 1, sets everything up, and explains the concepts (e.g. what is a latent image?). https://youtu.be/ppE1W0-LJas - the tutorial.

The workflow in the example is passed into the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead. And yes, this is arcane as FK, and I have no idea why some of the workflows are shared this way.

Jul 6, 2024: You can construct an image generation workflow by chaining different blocks (called nodes) together. An example of the images you can generate with this workflow:

ComfyUI's API is enough for making simple apps, but hard to write by hand. For the checkpoint, I suggest one that can handle cartoons/manga fairly easily. Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function). This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more. Upcoming tutorial: SDXL LoRA + using SD1.5 LoRA with SDXL, upscaling.
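On the "load the workflow from a file instead of an inline string" point: a small sketch, assuming a workflow saved with ComfyUI's "Save (API Format)" button and a local instance on the default port (127.0.0.1:8188); queue_prompt posts it to the /prompt endpoint.

```python
import json
import urllib.request

def load_workflow(path):
    """Read an API-format workflow JSON from disk, rather than
    embedding it in the script as one giant inline string."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def queue_prompt(workflow, host="127.0.0.1:8188"):
    """POST the workflow to a running ComfyUI instance; the /prompt
    endpoint expects a JSON body of the form {"prompt": <graph>}."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()
```

Usage would be `queue_prompt(load_workflow("my_workflow_api.json"))` — the filename is hypothetical, and the call obviously needs ComfyUI running locally.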
LoRA usage is confusing in ComfyUI. A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s.

Ideally nothing that's like "download this workflow and click 'install missing nodes'", because that never actually works.

ComfyUI basics tutorial. The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

Most Awaited Full Fine-Tuning (with DreamBooth effect) Tutorial - Generated Images, Full Workflow Shared in the Comments, NO Paywall This Time - Explained OneTrainer - Cumulative Experience of 16 Months of Stable Diffusion.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

The images look better than most SD1.5-based models, with greater detail, in SDXL 0.9.

I have a wide range of tutorials with both basic and advanced workflows. If you see a few red boxes, be sure to read the Questions section on the page. Starting workflow. Help, pls?
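Part of the confusion is that Automatic1111 puts the LoRA and its strength inline in the prompt as a <lora:name:weight> tag, while ComfyUI splits this into a separate LoraLoader node with explicit strength_model/strength_clip inputs. A hypothetical helper showing how the tags map across:

```python
import re

# A1111-style inline tag: <lora:name:weight>
LORA_TAG = re.compile(r"<lora:([^:>]+):([0-9.]+)>")

def extract_loras(prompt):
    """Return (clean_prompt, [(lora_name, weight), ...]) from an
    A1111-style prompt. Each weight is what you would type into the
    strength_model input of a ComfyUI LoraLoader node; the cleaned
    prompt goes to the CLIP Text Encode node."""
    loras = [(m.group(1), float(m.group(2))) for m in LORA_TAG.finditer(prompt)]
    clean = " ".join(LORA_TAG.sub(" ", prompt).split())
    return clean, loras

print(extract_loras("a dragon <lora:Dragon_Ball_Backgrounds_XL:0.8>"))
# → ('a dragon', [('Dragon_Ball_Backgrounds_XL', 0.8)])
```

This is only an illustration of the mapping, not a ComfyUI feature: in the graph itself you chain one LoraLoader per LoRA between the checkpoint loader and the sampler.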
Hey all, another tutorial! Hopefully this can help anyone who has trouble dealing with all the noodly goodness of ComfyUI; in it I show some good layout practices for ComfyUI and how modular systems can be built.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. It downloads the custom nodes and then gets to "downloading models & other files". Belittling their efforts will get you banned.

Try inpaint. Try outpaint. Hmm, low quality? Try latent upscale with 2 KSamplers.

Flux Schnell is a distilled 4-step model.

Initial Input block - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when generated). Prediffusion - this creates a very basic image from a simple prompt and sends it as a source.

ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. ComfyScript is simple to read and write and can run remotely. I tried the load methods from WAS-nodesuite-comfyUI and ComfyUI-N-Nodes, but they seem to load all of my images into RAM at once. This causes my steps to take up a lot of RAM, leading to killed RAM.

Not only was I able to recover a 176x144-pixel, 20-year-old video with this; in addition it supports the brand-new SD15 model for the Modelscope nodes by exponentialML, an SDXL Lightning upscaler (in addition to the AD LCM one), and a SUPIR second stage, for a total of a gorgeous 4K native output from ComfyUI!

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell. Breakdown of workflow content.
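One way around node packs that read an entire directory into RAM is to iterate over frame paths lazily and decode one image at a time inside the loop. A sketch, where the image_load_cap parameter is named after the node setting of the same name (0 meaning "no cap"):

```python
import os

def iter_frames(folder, image_load_cap=0):
    """Yield frame paths one at a time, in name order, instead of
    loading the whole sequence up front. Decode each frame only
    inside the consuming loop (e.g. with PIL) so at most one image
    is resident in RAM at once."""
    names = sorted(
        n for n in os.listdir(folder)
        if n.lower().endswith((".png", ".jpg", ".jpeg"))
    )
    if image_load_cap:          # 0 means "load every frame"
        names = names[:image_load_cap]
    for name in names:
        yield os.path.join(folder, name)
```

A consumer would then do `for path in iter_frames("frames/"): ...` and open each file as it goes; the folder name is hypothetical.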
(I will be sorting out workflows for tutorials at a later date in the YouTube description for each; many can be found in r/comfyui, where I first posted most of these.)

Is there a way to load each image in a video (or a batch) to save memory? My goal is that I start the ComfyUI workflow, and the workflow loads the latest image in a given directory and works with it.

1.86 s/it on a 4070 with the 25-frame model, 2.75 s/it with the 14-frame model. A lot of people are just discovering this technology, and want to show off what they created.

INITIAL COMFYUI SETUP and BASIC WORKFLOW. It looks like I need to switch my upscaling method. I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP Text Encode node as indicated on the diagram I have here from the GitHub page.

Related resources for Flux.1, such as LoRA, ControlNet, etc. You can find the Flux Dev diffusion model weights here; put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

Tutorial-wise, there are a bunch of images that can be loaded as a workflow by ComfyUI; you download the PNG and load it. Yesterday I was just playing around with Stable Cascade and made some movie posters to test the composition and letter writing. If there is anything you would like me to cover in a ComfyUI tutorial, let me know. Of course, if it takes more than 5 minutes, it is clear that there is a problem.

The ComfyUI-to-Python-Extension output can be written by hand, but it's a bit cumbersome, can't take benefit of the cache, and can only be run locally. Try generating basic stuff with a prompt; read about CFG, steps, and noise. To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. Load Image node. Let me know if you are interested in collaboration.
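Picking up the newest image in a directory (rather than the first by name) comes down to sorting on modification time; a small sketch, assuming common image extensions:

```python
import glob
import os

def latest_image(folder):
    """Return the most recently modified image in `folder`, or None
    if it contains no images - so a workflow can always start from
    the newest frame instead of relying on filename order."""
    paths = []
    for pattern in ("*.png", "*.jpg", "*.jpeg"):
        paths.extend(glob.glob(os.path.join(folder, pattern)))
    return max(paths, key=os.path.getmtime) if paths else None
```

The same `key=os.path.getmtime` trick answers the by-name-only sorting complaint about directory-loading nodes: sort (or pick) by mtime yourself before handing paths to the graph.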
Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

Looks awesome! Currently I am creating a tutorial for converting ComfyUI workflows to a production-grade multi-user backend API.

Start by loading up your standard workflow - checkpoint, KSampler, positive and negative prompts, etc. I load the models fine and connect the proper nodes, and they work, but I'm not sure how to use them properly to mimic other WebUIs' behavior.

Overview of different versions of Flux. Aug 2, 2024: Flux Dev. The API workflows are not the same format as an image workflow; you'll create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button.

Heya, I've been working on a few tutorials for ComfyUI over the past couple of weeks. If you are new at ComfyUI and want a good grounding in how to use it, then this tutorial might help you out.

You need to select the directory your frames are located in (i.e. where you extracted the frames zip file, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.

Follow basic ComfyUI tutorials on the ComfyUI GitHub, like the basic SD1.5 workflow. A search of the subreddit didn't turn up any answers to my question. Seems very hit and miss; most of what I'm getting looks like 2D camera pans.

All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else.
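The API format produced by "Save (API Format)" is a flat JSON object: each key is a node id mapping to a class_type plus an inputs dict, and a list-valued input like ["1", 1] is a link to output slot 1 of node "1". A sketch with a hypothetical two-node graph (the checkpoint filename is illustrative):

```python
# Hypothetical two-node API-format graph. CheckpointLoaderSimple's
# outputs are (MODEL, CLIP, VAE), so ["1", 1] wires in its CLIP.
api_workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},
    },
    "2": {
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "a movie poster", "clip": ["1", 1]},
    },
}

def upstream_nodes(workflow, node_id):
    """List the node ids a given node reads from: list-valued inputs
    of the form [node_id, output_slot] are cross-node links, while
    plain values (strings, numbers) are widget settings."""
    return [v[0] for v in workflow[node_id]["inputs"].values()
            if isinstance(v, list)]

print(upstream_nodes(api_workflow, "2"))  # ['1']
```

This is why the image-embedded workflow can't be posted to the API directly: the image stores the full editor graph (positions, groups, etc.), while the API wants only this stripped-down id/class_type/inputs form.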
Flux Hardware Requirements. Link to the workflows, prompts, and tutorials: download them here.

With a 3060 with 12 GB VRAM, it sometimes takes me up to 3 minutes to load SDXL, but once loaded, all other generations are faster because you don't need to load the checkpoint anymore. So for the first time you start the workflow, wait a while.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

How to install and use Flux.1. And now for part two of my "not SORA" series. The generated workflows can also be used in the web UI. I teach you how to build workflows rather than just copy them. Please share your tips, tricks, and workflows for using this software to create your AI art.

In Automatic1111, for example, you load a LoRA and control its strength by simply typing something like this: <lora:Dragon_Ball_Backgrounds_XL:0.8>.

It covers the following topics: Introduction to Flux.1.