ComfyUI upscaling: Reddit discussion roundup
It depends on how large the face in your original composition is.

It uses CN tile with Ultimate SD Upscale. I have a custom image resizer that ensures the input image matches the output dimensions.

For some context, I am trying to upscale images of an anime village, something like Ghibli style. The final steps are as follows: apply the inpaint mask, run it through the KSampler, take the latent output and send it to the latent upscaler (doing a 1.5 upscale), then on to a KSampler running 20-30 steps at 0.5 denoise.

Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

Here is a workflow that I use currently with Ultimate SD Upscale.

Hello, it's always nice to have new tips being shared, and thanks for that, but from what I see I think you still need to work on your workflow.

I did once get some noise I didn't like, but rebooted and all was good on the second try.

Latent upscale is different from pixel upscale.

Subsequently, I'd cherry-pick the best one and employ Ultimate SD Upscale for a 2x upscale.

Look at this workflow:

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

Latent quality is better, but the final image deviates significantly from the initial generation.

Here are details on the workflow I created: this is an img2img method where I use the Blip Model Loader from WAS to set the positive caption.

The downside is that it takes a very long time.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.
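The "custom image resizer" mentioned above isn't shown anywhere in the thread; here is a minimal sketch of the cover-then-center-crop arithmetic such a resizer typically performs before an img2img pass. The function name and return shape are my own assumptions, not a real ComfyUI node.

```python
def fit_to_output(in_w, in_h, out_w, out_h):
    """Scale the input so it fully covers the output size, then
    center-crop the excess. Returns (scaled_w, scaled_h, crop_left,
    crop_top) so a downstream resize/crop can match the output exactly."""
    scale = max(out_w / in_w, out_h / in_h)  # cover, don't letterbox
    sw, sh = round(in_w * scale), round(in_h * scale)
    left = (sw - out_w) // 2
    top = (sh - out_h) // 2
    return sw, sh, left, top
```

For example, a 512x768 portrait going into a 1024x1024 workflow scales to 1024x1536 and crops 256 px off the top and bottom.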
I decided to pit the two head to head; here are the results, workflow pasted below (did not bind it to image metadata because I am using a very custom, weird setup).

That said, Upscayl is SIGNIFICANTLY faster for me. Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

Latent upscale it, or use a model upscale, then VAE-encode it again and run it through the second sampler. Thanks.

In A1111, I employed a resolution of 1280x1920 (with HiRes fix), generating 10-20 images per prompt.

These comparisons are done using ComfyUI with default node settings and fixed seeds. No attempts to fix jpg artifacts, etc. Thanks for all your comments.

I only have 4GB VRAM, so I haven't gotten SUPIR working on my local system.

If I feel I need to add detail, I'll do some image-blend stuff and advanced samplers to inject the old face into the process. I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.

At the end, when you open and zoom in on your image, it's quite noticeable that your upscale generated visible seams between the upscaled tiles.

It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever. But I probably wouldn't upscale by 4x at all if fidelity is important.

Two options here.

If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.

Welcome to the unofficial ComfyUI subreddit.

Fastest would be a simple pixel upscale with Lanczos. But it's weird.
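The "very small steps with very small denoise" approach above amounts to a ladder of intermediate resolutions, each a modest factor larger, with a low-denoise pass at each rung. A sketch of that size schedule; the 1.25 step factor and function name are illustrative assumptions, not a quoted setting.

```python
def upscale_schedule(start, target, step=1.25):
    """Return the list of intermediate (w, h) sizes for upscaling in
    small increments, clamped so the final entry equals the target.
    step must be > 1 or the loop never reaches the target."""
    w, h = start
    sizes = []
    while w < target[0] or h < target[1]:
        w = min(target[0], round(w * step))
        h = min(target[1], round(h * step))
        sizes.append((w, h))
    return sizes
```

Going from 512x512 to 1024x1024 this way gives four rungs (640, 800, 1000, 1024) instead of one 2x jump, which is what lets each pass use a tiny denoise value.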
Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (SD 4X Upscale Model).

It will replicate the image's workflow and seed.

Simply add LORAs into your workflow: https://civitai.com/search/models?baseModel=SDXL%201.0&modelType=LORA&sortBy=models_v8&query=details

Usually I use two of my workflows: "latent upscale" and then denoising 0.5, or "upscaling with model" and then denoising 0.2 and resampling faces.

PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)?

A pixel upscale using a model like UltraSharp is a bit better - and slower - but it'll still be fake detail when examined closely.

So I made an upscale test workflow that uses the exact same latent input and destination size. The aspect ratio of 16:9 is the same from the empty latent and anywhere else that image sizes are used.

That's because latent upscale turns the base image into noise (blur).

That's practically instant but doesn't do much either.

That's because of the model upscale.

u/wolowhatever, we set 5 as the default, but it really depends on the image and image style tbh - I tend to find that most images work well around Freedom of 3. Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8.

No matter what, Upscayl is a speed demon in comparison. I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

Upscale and then fix will work better here.
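The point that "latent upscale turns the base image into noise (blur)" comes down to interpolation: a latent upscale node stretches the latent grid and fills the new positions with averages of their neighbors, so the enlarged latent is smoother than a real generation at that size. A toy per-channel stand-in (this is not ComfyUI's actual latent-upscale code):

```python
def bilinear_upscale(grid, factor):
    """Upscale a 2-D list of floats by bilinear interpolation. New cells
    are weighted averages of the four surrounding source cells, which is
    why the result looks blurred until a sampler re-adds detail."""
    h, w = len(grid), len(grid[0])
    H, W = int(h * factor), int(w * factor)
    out = []
    for y in range(H):
        fy = y * (h - 1) / (H - 1) if H > 1 else 0
        y0 = int(fy); y1 = min(y0 + 1, h - 1); dy = fy - y0
        row = []
        for x in range(W):
            fx = x * (w - 1) / (W - 1) if W > 1 else 0
            x0 = int(fx); x1 = min(x0 + 1, w - 1); dx = fx - x0
            top = grid[y0][x0] * (1 - dx) + grid[y0][x1] * dx
            bot = grid[y1][x0] * (1 - dx) + grid[y1][x1] * dx
            row.append(top * (1 - dy) + bot * dy)
        out.append(row)
    return out
```

Upscaling a sharp 0/1 checkerboard this way produces in-between values everywhere except the corners, which is exactly the smoothing a high-denoise second pass has to overcome.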
With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. Instead, I use Tiled KSampler with 0.5 noise.

Does anyone have any suggestions - would it be better to do an iterative upscale, or how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best. Already used tile ControlNet; not sure what else to do.

I solved that by using only 1 step and adding multiple iterative upscale nodes.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

I too use SUPIR, but just to sharpen my images on the first pass.

Grab the image from your file folder and drag it onto the ComfyUI window.

If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details).

If it's a close-up, then fix the face first.

Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.
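The "flip a switch" idea above - generate a cheap numbered batch first, then rerun with upscaling enabled only for the image numbers you liked - is just index filtering on the batch. A hedged sketch; the names here are hypothetical, not real ComfyUI nodes or widgets.

```python
def to_upscale(batch_size, picks, upscale_enabled):
    """Return the batch indices to route to the upscale branch.
    With the switch off, nothing is upscaled; with it on, only the
    picked, in-range image numbers are."""
    if not upscale_enabled:
        return []
    return [i for i in picks if 0 <= i < batch_size]
```

Out-of-range picks are silently dropped, so a typo in the image numbers doesn't error out a long queue.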
The workflow is kept very simple for this test: load image, upscale, save image.

You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass. One does an image upscale and the other a latent upscale. If you want actual detail in a reasonable amount of time, you'll need a 2nd pass with a 2nd sampler.

Now, transitioning to Comfy, my workflow continues at the 1280x1920 resolution.

May 6, 2024 · Those detail LORAs are 100% compatible with ComfyUI, and yes, that's the first, second, and third recommendation I would give.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements. A step-by-step guide to mastering image quality.

It's high quality, and it's easy to control the amount of detail added using control scale and restore CFG, but it slows down at higher scales faster than Ultimate SD Upscale does.

I often reduce the size of the video and the frames per second to speed up the process. I needed a workflow to upscale and interpolate the frames to improve the quality of the video.

I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). I've played around with different upscale models in both applications, as well as settings.

Both of these are of similar speed.

There are also "face detailer" workflows for faces specifically.

It works more like DLSS, tile by tile, and faster than the iterative one.
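Tiled upscalers like the ones discussed here work "tile by tile": the image is split into overlapping tiles so each fits in VRAM, and the overlap is what gets blended to hide seams (too little overlap is one cause of the visible seams complained about above). A sketch of the tile-grid arithmetic - illustrative only, not Ultimate SD Upscale's actual code:

```python
def tile_boxes(width, height, tile, overlap):
    """Compute (left, top, right, bottom) boxes covering an image with
    overlapping tiles. Edge tiles are clamped to the image bounds."""
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            right = min(left + tile, width)
            bottom = min(top + tile, height)
            boxes.append((left, top, right, bottom))
    return boxes
```

A 1024x1024 image with 512 px tiles and 64 px overlap needs a 3x3 grid of nine sampler passes, which is why tiled upscales trade speed for bounded VRAM.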
And at the end of it, I have a latent upscale step that I can't for the life of me figure out.

After 6 days of hard work (2 days build, 1 day testing, 2 days recording, 1 day editing, and very little sleep), I finally managed to upload this! Full tutorial in the YouTube description (it's entirely free, of course) - the video goes into 1h of detailed instructions on how to build it yourself (because I prefer for someone to learn how to fish than to give them a fish 😂).

I had the same problem, and those steps tank performance as well. Thanks.

Aug 31, 2024 · What is the main focus of the 'ComfyUI: Flux with LLM, 5x Upscale (Workflow Tutorial)' video? The main focus of the video is to provide a tutorial on how to use ComfyUI with Flux and a large language model (LLM) to upscale images up to 5x their original resolution using a custom workflow.

Upscale x1.5 ~ x2 - no need for a model, can be a cheap latent upscale. Sample again at 0.6 denoise and either CNet strength 0.5, euler, sgm_uniform, or CNet strength 0.9, end_percent 0.9, euler.

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale. Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). For example, if you start with a 512x512 empty latent image, then apply a 4x model and apply "upscale by" 0.5, you get a 1024x1024 final image (512 x 4 x 0.5 = 1024).

I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler, and upscale from that.

There is a face detailer node.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.

However, I switched to the Ultimate SD Upscale custom node.

And when purely upscaling, the best upscaler is called LDSR.
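The "4x upscaler for a 2x upscale" advice is pure arithmetic: the net scale is the model's factor times the "upscale by" value, so a 4x model followed by "upscale by" 0.5 lands on 2x overall. A tiny sketch of that calculation (the function name is my own):

```python
def final_size(base, model_factor, upscale_by):
    """Net resolution after an upscale-model pass followed by an
    'upscale by' resize: base * model_factor * upscale_by."""
    return round(base * model_factor * upscale_by)
```

So 512 px with a 4x model and "upscale by" 0.5 gives 1024 (a 2x result), and an 8x model with the same 0.5 gives 2048 (a 4x result), matching the thread's rule of thumb.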
I created this workflow to do just that. Hope someone can advise.

Also, both have a denoise value that drastically changes the result.

After borrowing many ideas and learning ComfyUI, this is just a simple node build off what's given and some of the newer nodes that have come out.

I then use a tiled ControlNet and use Ultimate Upscale to upscale by 3-4x, resulting in up to 6Kx6K images that are quite crisp.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

Then comes the higher resolution by upscaling.

Sample again at 0.5 denoise - you don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

Also, Ultimate SD Upscale is a node too; if you don't have enough VRAM, it tiles the image so that you don't run out of memory.

This is done after the refined image is upscaled and encoded into a latent.

Jan 5, 2024 · I have been experimenting with AI videos lately.

- latent upscale looks much more detailed, but gets rid of the detail of the original image.

If it's a distant face, then you probably don't have enough pixel area to do the fix justice.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

0.75 denoise with Ultimate SD Upscale is great, but how do I get rid of the sky mountains?

Generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using "Upscale with Model".

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.
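For the AI-video workflows mentioned here, reducing the frames per second before upscaling is just uniform frame subsampling: keep every Nth frame, where N is the input/output fps ratio. A hedged sketch of that index selection; this is a hypothetical helper, not a node from any specific video pack.

```python
def frame_indices(n_frames, fps_in, fps_out):
    """Return the indices of the frames to keep when dropping a clip
    from fps_in to fps_out by uniform subsampling."""
    if fps_out >= fps_in:
        return list(range(n_frames))  # nothing to drop
    step = fps_in / fps_out
    kept, t = [], 0.0
    while round(t) < n_frames:
        kept.append(round(t))
        t += step
    return kept
```

Halving a 30 fps clip to 15 fps keeps every other frame, which halves both the upscaling work and the interpolation load downstream.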
I've so far achieved this with the Ultimate SD image upscale, using the 4x-Ultramix_restore upscale model.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

It's why you need at least 0.5 denoise. Try a VAEDecode immediately after the latent upscale to see what I mean.

- image upscale is less detailed, but more faithful to the image you upscale.

The resolution is okay, but if possible I would like to get something better.

This will allow detail to be built in during the upscale.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LORAs can be added and easily turned on/off (currently configured for up to three LORAs, but it can easily take more). Details and bad-hands LORAs loaded. I use it with DreamShaperXL mostly and it works like a charm.

You guys have been very supportive, so I'm posting here first. It's nothing spectacular, but it gives good, consistent results.
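The "up to three LORAs, easily turned on/off" setup above boils down to a list of slots where disabled entries are skipped when the model chain is built. A minimal sketch; the slot names and strengths below are hypothetical stand-ins (loosely echoing the "details and bad-hands" LORAs mentioned), not files the poster actually used.

```python
# Each slot is (name, enabled, strength); disabled slots are skipped.
LORA_SLOTS = [
    ("add-detail-xl", True, 0.8),
    ("bad-hands", True, 0.6),
    ("style-ghibli", False, 1.0),
]

def active_loras(slots):
    """Return just the (name, strength) pairs that are switched on,
    in slot order, ready to be chained into the model."""
    return [(name, strength) for name, on, strength in slots if on]
```

Adding a fourth slot is just appending to the list, which is why the poster says the workflow "can easily take more".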