
ComfyUI Load Workflow Examples (collected from Reddit)

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW.

I couldn't find the workflows to directly import into Comfy. I might open an issue in ComfyUI about that.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to: https://openart.ai/profile/neuralunk?sort=most_liked

It is not much of an inconvenience when I'm at my main PC.

Upscaling ComfyUI workflow: it's nothing spectacular, but it gives good, consistent results. If you have the SDXL 1.0 Base and Refiner models downloaded and saved in the right place, it should work out of the box. But let me know if you need help replicating some of the concepts in my process.

Jul 6, 2024 · Download the first image on this page and drop it in ComfyUI to load the Hi-Res Fix workflow.

For ComfyUI there should be license information for each node, in my opinion - "Commercial use: yes / no / needs license" - and a workflow using a non-commercial node should show a warning in red.

The workflow in the example is passed to the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead.

I see YouTubers drag images into ComfyUI and get a full workflow, but when I do it, I can't seem to load any workflows. I was confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas they simply drop PNGs into an empty ComfyUI. The example pictures do load a workflow, but they don't have a label or text that indicates which version they are.

I'll do you one better, and send you a PNG you can directly load into Comfy.
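The load-the-workflow-from-a-file suggestion above can be sketched as follows. The function names and default server address are illustrative assumptions, but POSTing a JSON body of the form {"prompt": ...} to the /prompt endpoint matches ComfyUI's own basic API example script.

```python
import json
from urllib import request

def load_workflow(path):
    # Read an API-format workflow from disk instead of hard-coding it
    # as an inline string inside the script.
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def queue_prompt(workflow, server="127.0.0.1:8188"):
    # ComfyUI queues API-format workflows POSTed as JSON to /prompt.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = request.Request(f"http://{server}/prompt", data=payload)
    return request.urlopen(req)
```

Keeping the workflow in a separate .json file means you can re-export it from the UI without touching the script.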
A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

One trick I learned yesterday that makes sharing workflows easier when they include pictures and videos: use the Load Video (Path) node, post your video source online (on Imgur, for example), and link to it via that node with a simple URL.

My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.

It covers the following topics: Introduction to Flux.1; Overview of the different versions of Flux.1; Flux hardware requirements; How to install and use Flux.1 with ComfyUI.

Aug 2, 2024 · Flux Dev. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1: ComfyUI install guidance, workflow, and example. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and work entirely in latent space if you want. Of course, with so much power also comes a steep learning curve, but it is well worth it IMHO.

I use a Google Colab VM to run ComfyUI. It's simple and straight to the point.

Instead, I created a simplified 2048x2048 workflow.

I tried to find either of those two examples, but I have so many damn images I couldn't find them. If anyone else is reading this and wanting the workflows, here are a few simple SDXL workflows using the new OneButtonPrompt nodes, saving the prompt to file (I don't guarantee tidiness).

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass into the node whatever image I like. Due to the VAE, maybe there is an obvious solution, but I don't know it.
Just load your image and prompt, and go.

(That is, where did you extract the frames zip file, if you are following along with the tutorial.) image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.

ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag'n'dropping a picture from that repo. Adding the same JSONs to the main repo would only add more hell to the commit history and just unnecessarily duplicate the already existing examples repo.

But when I'm doing it from a work PC or a tablet, it is an inconvenience to obtain my previous workflow.

Flux Schnell is a distilled 4-step model.

I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture). So, I just made this workflow in ComfyUI.

Comfy Workflows - run any ComfyUI workflow w/ zero setup (free & open source). I even have a working SDXL example in raw Python on the readme.

Keyboard shortcuts:
Ctrl + S: Save workflow
Ctrl + O: Load workflow
Ctrl + A: Select all nodes
Alt + C: Collapse/uncollapse selected nodes
Ctrl + M: Mute/unmute selected nodes
Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
Delete/Backspace: Delete selected nodes
Ctrl + Backspace: Delete the current graph

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

Nobody needs all that, LOL. Just my two cents.

Using the Comfy image saver node will add EXIF fields that can be read by IIB, so you can view the prompt for each image without needing to drag/drop every single one.
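The frame-selection behaviour described above can be sketched like this; it is an illustrative approximation of the node's logic (helper name made up), not its actual implementation.

```python
import os

def collect_frames(directory, image_load_cap=0):
    # Pick up frames from the chosen directory in name order.
    # A cap of 0 means "load every frame"; any other value limits how
    # many frames are loaded, and thus how long the animation is.
    frames = sorted(
        f for f in os.listdir(directory)
        if f.lower().endswith((".png", ".jpg", ".jpeg"))
    )
    return frames if image_load_cap == 0 else frames[:image_load_cap]
```

So with 120 frames on disk, a cap of 0 gives a 120-frame animation and a cap of 24 gives a 24-frame one.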
I can't load workflows from the example images using a second computer. I can load the ComfyUI page on port 8188, but when I try to load a flow through one of the example images, it just does nothing. So every time I reconnect I have to load a presaved workflow to continue where I started. Help, pls?

You can just use someone else's workflow for 0.9 (just search YouTube for an SDXL 0.9 workflow).

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

You can find the Flux Dev diffusion model weights here.

I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/.

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI, and it will automatically contain the workflow as well.

Besides, by recording the precise "workflow" (= the collection of interconnected nodes), you even get reasonably good reproducibility: if you load the workflow and change nothing (including the seed), you should get exactly the same result.

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

Here's a quick example where the lines from the scribble actually overlap with the pose (second pic).

ComfyUI needs a standalone node manager, IMO - something that can do the whole install process and make sure the correct install paths are being used for modules.

If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted.
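The metadata mentioned above lives in the PNG's text chunks: ComfyUI writes the full node graph under a "workflow" key (and the API-format graph under "prompt"), which is what drag-and-drop loading reads back. Assuming Pillow is installed, a sketch of reading it yourself:

```python
import json
from PIL import Image  # assumes the Pillow package is available

def read_embedded_workflow(png_path):
    # PNG tEXt chunks show up in Pillow's .info dict; ComfyUI stores
    # the editable graph under "workflow" and the API graph under
    # "prompt", both as JSON strings.
    info = Image.open(png_path).info
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None
```

If this returns None, the image was probably re-encoded by a host that strips metadata, which is exactly why such images fail to load a workflow.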
Study this workflow and the notes to understand the basics of ComfyUI, SDXL, and the Refiner workflow.

Many of the workflow examples can be copied either visually or by downloading a shared file containing the workflow.

Upcoming tutorial - SDXL LoRA + using a 1.5 LoRA with SDXL, and upscaling.

How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. It'll load a basic SDXL workflow that includes a bunch of notes explaining things.

Breakdown of workflow content. The image blank can be used to copy (clipspace) to both of the load image nodes; from there you just paint your masks, set your prompts (only the base negative prompt is used in this flow), and go. Prediffusion - this creates a very basic image from a simple prompt and sends it as a source. It will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when generated).

A problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it.

Hope you like some of them :)

The API workflows are not the same format as an image workflow; you'll create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button you've probably been using.

WAS suite has some workflow stuff in its GitHub links somewhere as well.

If the term "workflow" is something that has only been used exclusively to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img2img and text2img.
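The two formats mentioned above differ in shape: a normal UI export has top-level "nodes" and "links" lists, while a Save (API Format) export maps node ids straight to objects with "class_type" and "inputs". A small illustrative checker (the function name is made up):

```python
def workflow_format(data):
    # UI exports carry the editor layout in "nodes"/"links" lists;
    # API exports are a flat map of node-id -> {class_type, inputs}.
    if isinstance(data, dict) and isinstance(data.get("nodes"), list):
        return "ui"
    if (
        isinstance(data, dict)
        and data
        and all(isinstance(v, dict) and "class_type" in v for v in data.values())
    ):
        return "api"
    return "unknown"
```

A script that feeds /prompt needs the "api" shape; dropping a "ui" export into it will fail.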
The EXIF data won't capture the entire workflow, but to quickly see an overview of a generated image, this is the best you can currently get. You can see it's a bit chaotic in this case, but it works.

I actually just released an open-source extension that will convert any native ComfyUI workflow into executable Python code that will run without the server.

Eh, if you build the right workflow, it will pop out 2K and 8K images without the need for a lot of RAM.

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.

I recently switched from A1111 to ComfyUI to mess around with AI-generated images.

They do overlap.

You need to select the directory your frames are located in.

You can then load or drag the following image in ComfyUI to get the workflow: ComfyUI Examples. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

Still working on the whole thing, but I got the idea down.

Yes - 8 GB card: the ComfyUI workflow loads both SDXL base & refiner models, a separate XL VAE, 3 XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model and input from the same base SDXL model - and they all work together.

This is just a simple node build off what's given and some of the newer nodes that have come out. It's just not intended as an upscale from the resolution used in the base model stage.

Img2Img ComfyUI workflow. This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more.

Thank you u/AIrjen! Love the variant generator, super cool.
I think it was 3DS Max.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

Really happy with how this is working.

Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function). This is a more complex example, but it also shows you the power of ComfyUI.

Ending Workflow.

Then restart ComfyUI.

After studying the nodes and edges, you will know exactly what Hi-Res Fix is.

Create animations with AnimateDiff.

You can encode then decode back to a normal KSampler with a 0.2 denoise to fix the blur and soft details, or you can just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise.

The best workflow examples are through the GitHub examples pages, or through searching Reddit; the ComfyUI manual needs updating, IMO.

And if you copy it into ComfyUI, it will output a text string which you can then plug into your CLIP Text Encode node, and it is then used as your SD prompt.

Same workflow as the image I posted, but with the first image being different.

This is done using WAS nodes.
If you have any of those generated images as the original PNG, you can just drop them into ComfyUI and the workflow will load.

SDXL Default ComfyUI workflow.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that for each queued generation it loads the 001 image from the folder, and for the next generation grabs the 002 image from the same folder? Any ideas on this?

That's a bit presumptuous, considering you don't know my requirements.

Starting workflow.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

Also notice that you can download that image and drag'n'drop it into your ComfyUI to load that workflow, and you can also drag'n'drop images onto a Load Image node to load them quicker.

And another general difference is that in A1111, when you set 20 steps and 0.8 denoise, you won't actually get 20 steps; it rather decreases that amount to 16.

Merging 2 Images together.

ControlNet Depth ComfyUI workflow.

You can then load or drag the following image in ComfyUI to get the workflow: Load Image Node.

Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more. https://youtu.be/ppE1W0-LJas - the tutorial.

You should now be able to load the workflow, which is here.

Just search for an SDXL 0.9 workflow (the one from Olivio Sarikas' video works just fine) and just replace the models with 1.0.

I had to place the image into a zip, because people have told me that Reddit strips PNGs of their metadata.
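The A1111 behaviour described above is simple arithmetic: the sampler only runs the denoised fraction of the requested schedule. This hypothetical helper (not A1111's actual code) shows the calculation:

```python
def a1111_effective_steps(steps, denoise):
    # In A1111-style img2img, roughly steps * denoise sampling steps
    # actually execute, e.g. 20 steps at 0.8 denoise -> 16 steps.
    return round(steps * denoise)
```

So to really get 20 steps at 0.8 denoise there, you would have to request 25.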
You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s.

1.5 with LCM with 4 steps.

It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but this can be changed to whatever. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

I can load workflows from the example images through localhost:8188; this seems to work fine.
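As a rough guide to "other sizes compatible with SDXL": the commonly used resolutions keep approximately the 1024x1024 pixel budget with both sides a multiple of 64. The helper below is an illustrative sketch under that assumption, not an official resolution list:

```python
def sdxl_size(aspect_w, aspect_h, target_pixels=1024 * 1024, multiple=64):
    # Scale the aspect ratio so the area stays near the SDXL training
    # budget, then snap both sides to a multiple of 64.
    scale = (target_pixels / (aspect_w * aspect_h)) ** 0.5
    w = round(aspect_w * scale / multiple) * multiple
    h = round(aspect_h * scale / multiple) * multiple
    return w, h
```

For example, a 16:9 request comes out at 1344x768, one of the sizes widely used with SDXL.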