Best upscale model for ComfyUI (Reddit)
Fastest would be a simple pixel upscale with lanczos. In ComfyUI Manager, search for "upscale" and click Install for the models you want.

With a denoise setting around 0.25 I get a good blending of the face without changing the image too much. Tried the llite custom nodes with LLLite models and was impressed. This ComfyUI node setup lets you use the Ultimate SD Upscale custom nodes in your ComfyUI AI generation routine.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale like you mentioned. Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again.

Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6.0-RC; it's taking only 7.5 GB of VRAM and swapping the refiner too. Use the --medvram-sdxl flag when starting.

An alternative method: make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first pass (low resolution) sample.

Welcome to the unofficial ComfyUI subreddit.

Image upscale is less detailed, but more faithful to the image you upscale. Messing around with upscale-by-model is pointless for hires fix. I am curious both which nodes are the best for this, and which models. I believe it should work with 8 GB VRAM provided your SDXL model and upscale model are not super huge. Also, both have a denoise value that drastically changes the result.

Upscaling: increasing the resolution and sharpness at the same time. Yep, people do say that Ultimate SD Upscale works for SDXL as well now, but it didn't work for me.
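The "simple pixel upscale with lanczos" mentioned above needs no diffusion model at all. A minimal sketch with Pillow; the function name is my own, and the demo uses a synthetic image instead of a real file:

```python
from PIL import Image

def lanczos_upscale(img, factor):
    """Plain pixel upscale with the Lanczos filter: fast, but adds no new detail."""
    w, h = img.size
    return img.resize((int(w * factor), int(h * factor)), Image.LANCZOS)

# Demo on a synthetic image; with a real file you'd use Image.open("input.png").
src = Image.new("RGB", (512, 512), "gray")
up = lanczos_upscale(src, 2.0)
```

This is the fastest option in the thread's ranking precisely because it is a pure resampling filter: no model weights, no VRAM, and no invented detail.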
FWIW, I was using it with the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image. Same seed is probably not necessary and can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers. But for the other stuff, super small models and good results.

Import times for custom nodes: 0.0 seconds (IMPORT FAILED): R:\diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale

Aug 5, 2024 · Flux has been out for under a week and we're already seeing some great innovation in the open source community. I was working on exploring and putting together my guide on running Flux on RunPod ($0.34 per hour) and discovered this workflow by @plasm0 that runs locally and supports upscaling as well.

I haven't been able to replicate this in Comfy. Now go back to img2img, mask the important parts of your image, and upscale that. I ran some tests this morning. I found a tile model but could not figure it out, as LLLite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile. Attach to it a "latent_image"; in this case it's "upscale latent".

Best aesthetic scorer custom node suite for ComfyUI? I'm working on the upcoming AP Workflow 8.0 and want to add an Aesthetic Score Predictor function.

Best method to upscale faces after doing a faceswap with ReActor? It's a 128px model, so the output faces after faceswapping are blurry and low-res.

Denoise around 0.5; you don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise (around 0.15) if you want higher resolution. E.g., use a X2 upscaler model.

So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?
In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). All of this can be done in Comfy with a few nodes.

I first create the image with SDXL, then Ultimate SD Upscale using a SD 1.5 model. For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results. If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook. There is no tiling in the default A1111 hires fix.

Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale; sample again at a low denoise.

I want to upscale my image with a model, and then select its final size. This is the 'latent chooser' node; it works but is slightly unreliable. Sometimes models appear twice, for example "4xESRGAN" used by chaiNNer and "4x_ESRGAN" used by Automatic1111.

A pixel upscale using a model like UltraSharp is a bit better (and slower) but it'll still be fake detail when examined closely. I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). If you want actual detail in a reasonable amount of time, you'll need a 2nd pass with a 2nd sampler.

"Upscaling with model" is an operation on normal images, and we can use a corresponding model such as 4x_NMKD-Siax_200k.pth. So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).
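The KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By -> VAE Encode -> KSampler (2) chain above implies a specific factor for the "Upscale Image By" step: the fixed-scale model overshoots, so you scale the result back down. A tiny helper to compute that factor; the function name is mine, not a ComfyUI node:

```python
def upscale_by_factor(src_side, model_scale, target_side):
    """Factor for the 'Upscale Image By' step after a fixed-scale model
    upscaler, so the final image lands on target_side pixels."""
    return target_side / (src_side * model_scale)

# A 1024px side through a 4x model, aiming at 2048px: scale the result by 0.5,
# which is the "4x * 0.5" trick mentioned elsewhere in this thread.
factor = upscale_by_factor(1024, 4, 2048)
```

The same arithmetic explains why a 4x model plus a 0.5 downscale is an easy way to get a clean 2x result.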
You create nodes and "wire" them together. Note: remember to add your models, VAE, LoRAs, etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation.

That's a good model, but to be very clear, it's not "objectively better" than anything else on that site; OP's entire basis for the post is just wrong, and purpose-built upscale models are NOT "advancing" in the way they seem to believe.

You can also run a regular AI upscale then a downscale (4x * 0.5 for the factor 2). So in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers) but I'm not forced to have them only multiply by 4x. For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). Then another node under Loaders: the "Load Upscale Model" node.

If you don't want the distortion, decode the latent, use Upscale Image By, then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refiner: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image. Though, from what someone else stated, it comes down to use case.

Jan 13, 2024 · TLDR: Both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, seg/masking) can improve your upscales.

The custom node suites I found so far either lack the actual score calculator, don't support anything but CUDA, or have very basic rankers (unable to process a batch, for example, or only…).

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4X upscale model).
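"You create nodes and wire them together" also holds outside the GUI: ComfyUI accepts the same node graph as JSON over its local HTTP API. A sketch of wiring the upscale-model nodes discussed here; the server address, model filename, and the dangling image input are assumptions, so check your installed node names before relying on it:

```python
import json
import urllib.request

# Two wired nodes: load an upscale model, then apply it to an image.
# ["1", 0] means "output 0 of node 1"; ["0", 0] is a placeholder for
# whatever node actually produces the image in a full workflow.
workflow = {
    "1": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x_NMKD-Siax_200k.pth"}},
    "2": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["1", 0], "image": ["0", 0]}},
}

def queue_prompt(prompt, host="127.0.0.1:8188"):
    """POST the graph to a locally running ComfyUI instance."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    urllib.request.urlopen(urllib.request.Request(f"http://{host}/prompt", data=data))
```

The wiring convention is the point: every input either holds a literal value or a two-element reference to another node's output, which is exactly what the visual graph serializes to.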
Like I can understand that using the Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node, with a denoise setting of 0.4. It's a lot faster than tiling, but outputs aren't detailed. I'd say it allows a very high level of access and customization, more than A1111, but with added complexity. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

This custom node is failing to load, but I think this is a separate issue.

Ultimate SD Upscale is the best for me; you can use it with a ControlNet tile model in SD 1.5. The resolution is okay, but if possible I would like to get something better. There are also "face detailer" workflows for faces specifically. The downside is that it takes a very long time.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space. There's "latent upscale by", but I don't want to upscale the latent image. From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, one is non-latent scaling. If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model, with an ESRGAN upscaler) >> FaceDetailer. That's because of the model upscale.

DirectML (AMD cards on Windows): pip install torch-directml, then you can launch ComfyUI with: python main.py --directml
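The two 2-pass methods from ComfyUI_examples differ only in where the resize happens: on the latent, or on the decoded image. A shape-level sketch in PyTorch, where random tensors stand in for a real VAE's latents and pixels, and bicubic stands in for ComfyUI's bislerp:

```python
import torch
import torch.nn.functional as F

# SD-style layout: a 512x512 image encodes to a 4-channel 64x64 latent.
latent = torch.randn(1, 4, 64, 64)
image = torch.randn(1, 3, 512, 512)

# Latent scaling: resize the latent itself, then resample at low denoise.
latent_up = F.interpolate(latent, scale_factor=2.0, mode="bicubic")

# Non-latent scaling: decode first, resize the pixels, then re-encode
# (or run a pixel-space model like RealESRGAN instead of interpolate).
image_up = F.interpolate(image, scale_factor=2.0, mode="bicubic")
```

This is also why pixel-space-only models such as RealESRGAN_x4plus_anime_6B can never slot into the latent branch: they expect 3-channel images, not 4-channel latents.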
Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. So from VAE Decode you need an "Upscale Image (using model)" node, under Loaders.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). Use SD 1.5 models such as DreamShaper or those which provide good details. But I probably wouldn't upscale by 4x at all if fidelity is important.

Super late here, but is this still the case? I've got CCSR & TTPlanet.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. And when purely upscaling, the best upscaler is called LDSR. Thanks.

Usually I use two of my workflows. For upscaling I mainly used the chaiNNer application with models from the Upscale Wiki Model Database, but I also used the fast stable diffusion Automatic1111 Google Colab and also the Replicate website's super-resolution collection.

ReActor has built-in CodeFormer and GFPGAN, but all the advice I've read said to avoid them. Please keep posted images SFW.

Does anyone have any suggestions? Would it be better to do an ite… From what I've generated so far, the model upscale edges slightly better than the Ultimate Upscale.
Now I use it only with SDXL (bigger tiles, 1024x1024) and I do it multiple times with decreasing denoise and cfg.

For comparison, in A1111 I drop the ReActor output image in the img2img tab, keep the same latent size, use a tile ControlNet model, and choose the Ultimate SD Upscale script to scale it up.

To get the absolute best upscales requires a variety of techniques, and often regional upscaling at some points. It turns out lovely results, but I'm finding that when I get to the upscale stage, the face changes to something very similar every time.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from…

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos. I took a 2-4 month hiatus, basically when the OG upscale checkpoints came out, like SUPIR, so I have no heckin' idea what is the go-to these days.

Good for depth and OpenPose; so far so good.

It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt simply is "Caption the image" or "Describe the image", Florence2 wins.

Jan 5, 2024 · Click on Install Models in the ComfyUI Manager menu.

Generates an SD 1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

Latent upscale looks much more detailed, but gets rid of the detail of the original image. You can easily utilize the schemes below for your custom setups. Model: base SD v1.5; see the workflow for more info.
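"Multiple times with decreasing denoise and cfg" is easy to parameterize. A hypothetical schedule helper; the start/end values are placeholders I chose, not numbers from the thread:

```python
def pass_schedule(passes, denoise=(0.4, 0.15), cfg=(7.0, 4.0)):
    """Linearly interpolated (denoise, cfg) pairs for repeated
    tiled upscale passes, strongest pass first."""
    if passes == 1:
        return [(denoise[0], cfg[0])]
    steps = []
    for i in range(passes):
        t = i / (passes - 1)
        steps.append((round(denoise[0] + t * (denoise[1] - denoise[0]), 3),
                      round(cfg[0] + t * (cfg[1] - cfg[0]), 3)))
    return steps

schedule = pass_schedule(3)  # [(0.4, 7.0), (0.275, 5.5), (0.15, 4.0)]
```

Early passes get the freedom to invent detail; later passes, with low denoise and cfg, mostly consolidate it without drifting from the image.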
In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise. If you want a better grounding at making your own ComfyUI systems, consider checking out my tutorials. That's because latent upscale turns the base image into noise (blur).

diffusers/stable-diffusion-xl-1.0-inpainting-0.1 at main (huggingface.co)

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale. But basically txt2img, img2img, 4x upscale with a few different upscalers. Then output everything to Video Combine.

Connect the Load Upscale Model node with the Upscale Image (using Model) node to VAE Decode, then from that image to your preview/save image node.

Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning. I decided to pit the two head to head; here are the results, workflow pasted below (did not bind to image metadata because I am using a very custom, weird setup). You could also try a standard checkpoint with, say, 13 and 30.

ComfyUI uses a flowchart diagram model. There are also other upscale models that can upscale latents with less distortion; the standard ones are going to be bicubic, bilinear, and bislerp.

Forgot to mention: you will have to download this inpaint model from Hugging Face and put it in your ComfyUI "unet" folder, which can be found in the models folder. I've so far achieved this with the Ultimate SD image upscale and using the 4x-Ultramix_restore upscale model.