Best Stable Diffusion ADetailer face tips (a roundup of Reddit advice)

If the low 128px resolution of the ReActor faceswap model is an issue for you (e.g. you want to generate high-resolution face portraits) and CodeFormer changes the face too much, you can use upscaled, face-restored half-body shots of the character to build a small dataset for LoRA training. Check out my original post where I added a new image with freckles.

If you are generating an image with multiple people in the background, such as a fashion show scene, increase this to 8.

In this post, you will learn how to use ADetailer to fix faces. She has become the standard AI face at this point.

Tried making some realistic pictures and kept having problems with eyes. Easy to fix with ADetailer though, either with the face model, the eye model, or both. Highly recommend this extension: it saves you time and is great for quickly fixing common issues like garbled faces. Regional Prompter and Ultimate SD Upscale are also very good and useful.

Here are a few things you might want to try: if you get the image mostly right, use inpaint on the face with the SD 1.5 inpainting checkpoint. A low-tech solution is to use a full name in the prompt. You can also put a wildcard in the ADetailer prompt; that way each time it's called it's different (if it's called for each face individually this will work; if it just calls one prompt for all faces then never mind).

Motion Bucket makes perfect sense, and I'd like to isolate CFG scale for now to determine the most consistent value.

I've never used Forge, but I'm pretty sure it supports the ADetailer extension.

SDXL is the newer base model for Stable Diffusion; compared to the previous models it generates at a higher resolution and produces much less body horror, and I find it seems to follow prompts a lot better and provide more consistency for the same prompt. Yes, SDXL is capable of little details. Best Stable Diffusion models of all time: SDXL is the best overall Stable Diffusion model, excellent at generating highly detailed, realistic images.

The rest looks like some extra upscaling; not sure, you'd have to ask the creator what they did to get that resolution.

I like any Stable Diffusion related project that's open source, but InvokeAI seems to be disconnected from the community and how people are actually using SD.

In ComfyUI you can detect the face (or hands, body) with the same process ADetailer uses, then inpaint the face, etc. Also bypass the AnimateDiff Loader model to the original model loader in the To Basic Pipe node, otherwise it will give you noise on the face (the AnimateDiff loader doesn't work on a single image, you need at least four, and FaceDetailer can only handle one).

After upscaling, the character's face looked off, so I took my upscaled frames and processed them through img2img.

If you are using A1111, the easiest way is to install the ADetailer extension: it will auto-inpaint features of the image (there are models for face, eyes, and hands) and you can set a separate prompt for each. Keep the ADetailer denoising strength around 0.4 so it keeps the angle and some structure guidance from the base image; the result looks better and not stickered on. Denoising strength runs from 0 to 1: a 0 won't change the image at all, and a 1 will replace it completely.

I use SD 1.5 text2img with ADetailer for the face with face_yolov8s. Putting the LoRA only in the ADetailer prompt (for the purpose of keeping likeness with trained faces) seemed to give nicer details to the face without the overexposed results of using the LoRA in both ADetailer and the regular prompt with the same settings.
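For anyone scripting their generations, here is a minimal sketch of enabling ADetailer with its own face prompt through the AUTOMATIC1111 web UI API. It assumes the web UI is running locally with the --api flag and the ADetailer extension installed; the exact "args" schema has changed between ADetailer versions, so treat the field names (ad_model, ad_prompt, and so on) as illustrative rather than authoritative.

```python
import requests

# Sketch only: field names follow the ADetailer API docs but may differ by version.
payload = {
    "prompt": "photo of a woman in a cafe, detailed skin",
    "negative_prompt": "lowres, bad anatomy",
    "steps": 25,
    "width": 512,
    "height": 768,
    "alwayson_scripts": {
        "ADetailer": {
            "args": [
                {
                    "ad_model": "face_yolov8n.pt",           # face detector
                    "ad_prompt": "detailed face, freckles",  # separate prompt used only for the face pass
                    "ad_denoising_strength": 0.4,            # keep structure from the base image
                    "ad_inpaint_only_masked": True,
                }
            ]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
images_base64 = r.json()["images"]  # list of base64-encoded PNGs
```

In the UI itself this is just the "Enable ADetailer" checkbox plus the per-model prompt boxes; the API payload mirrors those fields.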
ADetailer runs for sure on SD 1.5. For an SD 1.5 model, use a resolution of 512x512 or 768x768.

tl;dr: just check "Enable ADetailer" and generate like usual; it'll work just fine with the default settings. You can have it do automatic face inpainting after the image is generated, using whatever prompt you want. The only drawback is that it will significantly increase the generation time.

The Face Restore feature in Stable Diffusion has never really been my cup of tea. It's going to paste whatever face it reconstructs over top after everything is done, and it limits your options on how much detail you can get in the face. Typically, folks flick on Face Restore when the face generated by SD starts resembling something you'd find in a sci-fi flick.

Seems worse to me tbh: if the LoRA is in the prompt it also takes body shape (if you trained more than just the face) and hair into account; in ADetailer it just slaps the face on and doesn't seem to change the hair. On the other hand, putting it in ADetailer means you can increase the weight and prevent colors from bleeding into the rest of the image, or use LoRAs/embeddings separately for the main image vs. the face.

Overall it's good and follows prompts really well, but it is shit with faces :( And no, don't recommend me a LoRA, I have to keep my generations future-proof and easy to replicate.

Settings that have worked for me: detection model confidence threshold = 0.6, mask merge mode = Merge, inpaint mask blur = 8, inpaint denoising strength = 0.4, inpaint only masked (padding) = 32, use separate width/height = 1024/1024, use separate steps = 30, use separate CFG scale = 9.

Doesn't it also have to do with the source face? Just put the face name in the positive prompt for ADetailer and that's the face you get.

Hey, bit of a dumb issue, but I was hoping one of you might be able to help me. No way! Just today I was like "I need to learn the differences between ControlNets and I really don't understand IP Adapters at all."

Raw output, pure and simple txt2img.

For video, put ImageBatchToImageList > FaceDetailer > ImageListToImageBatch > Video Combine. I recently discovered this trick and it works great to improve quality and stability of faces in video, especially with smaller objects.

I've noted the face model includes the face and head, but I sometimes don't want to touch it.

No more struggling to restore old photos, remove unwanted objects, and fix faces and hands that look off in Stable Diffusion. They hardly do any faces without ADetailer. Inpainting or ADetailer? ADetailer is a tool in the toolbox. (Siax should do well on human skin, since that is what it was trained on.)

Every time I use those two faceswapping extensions, the expressions are always the same generic smiling one.

Sometimes when I struggle with bad quality, deformed faces I use ADetailer, but it doesn't work perfectly: when img2img destroys the face, ADetailer can't help enough and creates strange, bad results.

All of this being said, thanks :) Video generation is quite interesting and I do plan to continue.

Say you have a 1024x1024 image, and the face you want to fix takes up an 8th of the canvas. Even if you run a 2x upscale before using ADetailer, that 8th-size face is still small; at 256x256, SD 1.5's base resolution is closer to it than SDXL's is. You need to crop the image to just around the face in a bounding box and then upscale it to a higher resolution, perform your face swap, then scale it back down and paste it over the old face.
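A rough sketch of that crop, upscale, fix, and paste-back idea using Pillow. Here fix_face() is a placeholder for whatever you actually run on the crop (a face swap, an img2img pass, GFPGAN), not a real library call, and the box is assumed to come from your face detector.

```python
from PIL import Image

def fix_face(face_img):
    """Placeholder: run your face swap / img2img / restore step on the cropped face here."""
    return face_img

def detail_face(img, box, work_size=512, pad=32):
    """Crop the face, upscale it, fix it, then paste it back at the original size."""
    x1, y1, x2, y2 = box
    # Pad the bounding box a little so the result blends with surrounding skin and hair.
    x1, y1 = max(0, x1 - pad), max(0, y1 - pad)
    x2, y2 = min(img.width, x2 + pad), min(img.height, y2 + pad)

    crop = img.crop((x1, y1, x2, y2))
    scale = work_size / max(x2 - x1, y2 - y1)                       # keep the aspect ratio
    upscaled = crop.resize((round((x2 - x1) * scale), round((y2 - y1) * scale)), Image.LANCZOS)
    fixed = fix_face(upscaled)
    restored = fixed.resize((x2 - x1, y2 - y1), Image.LANCZOS)      # back to the original footprint

    out = img.copy()
    out.paste(restored, (x1, y1))
    return out

# Usage: the box comes from whatever detector you use (ADetailer's YOLO face models, for example).
image = Image.open("frame_0001.png")
fixed_image = detail_face(image, box=(412, 128, 540, 290))
fixed_image.save("frame_0001_fixed.png")
```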
That is, except for when the face is not oriented up and down: for instance when someone is lying on their side, or if the face is upside down.

I'm used to generating 512x512 on models like Cetus, then a 2x upscale at 0.4 denoise with 4x UltraSharp and an ADetailer pass for the face. I tried to upscale a low-res image in img2img with ADetailer on; it still doesn't do much.

The next version of Stable Diffusion ("SDXL"), currently beta tested with a bot in the official Discord, looks super impressive. Here's a gallery of some of the best photorealistic generations posted so far on Discord. Stable Diffusion 1.5 is the earlier version that was (and probably still is) very popular.

The rabbit hole is pretty darn deep. I use SD.Next if I want something simple or rapid inpainting, etc.

Hey AI fam, working on finding the best SVD settings.

Not sure what the issue is. I've installed and reinstalled many times, made sure it's selected, don't see any errors, and whenever I use it, the image comes out exactly as if I hadn't used it (tested with and without reusing the seed).

Not sure if ADetailer works with SDXL yet (I assume it will at some point), but that package is a great way to automate fixing hands and faces via inpainting.

A full-body image 512 pixels high has hardly more than 50 pixels for the face, which is not nearly enough to make a non-monstrous face.

I thought I'd share the truly miraculous results ControlNet is able to produce with inpainting while we're on the subject: as you can see, it's a picture of a human being walking with a very specific pose, because the inpainting model included in ControlNet totally does things, and it definitely works with inpainting now. Like, wow, look at how much that works. ADetailer doesn't require an inpainting checkpoint or ControlNet etc.; simpler is better.

Faces work fine; hands are worse. Hands are too complex for an AI to draw for now. I list whatever I want on the positive prompt and put (bad quality, worst quality:1.4), (hands:0.8) on the neg; lowering the hands weight gives better hands, and I've found long lists of negatives or embeddings don't really improve the result.

For the small faces, we say "Hey ADetailer, don't fix faces smaller than 0.6% of the whole puzzle!" That's like telling ADetailer to leave the tiny faces alone.

After hitting "generate", Stable Diffusion always generates an image which is basically good, but in every text I read about inpainting the masked area gets replaced by the newly generated picture.

I'm using the Forge webui. I got the best effect with the "img2img skip" option enabled in ADetailer, but then the rest of the image remains raw. ADetailer works though. It has its uses, and many times, especially as you're moving to higher resolutions, it's best just to leverage inpaint, but it never hurts to experiment with the individual inpaint settings within ADetailer. I don't really know about hires fix upscaling though; I mostly used models in chaiNNer straight.

It's too bad, because there's an audience for an interface like theirs.

ADetailer is one of the most popular auto-masking and inpainting tools for the Stable Diffusion WebUI. If you are using Automatic1111, install the ADetailer extension, which can segment and fix faces and hands. There are various models for ADetailer trained to detect different things such as faces, hands, lips, eyes, breasts, genitalia. Try the face_yolov8 models; the N and S variants are just different sizes of the same detector.
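If you want to see what those detectors find before letting ADetailer inpaint anything, you can run the same YOLO face model directly. A minimal sketch, assuming the ultralytics package is installed, that portrait.png is your image, and that face_yolov8n.pt has been downloaded (ADetailer fetches these detectors from its model repository on Hugging Face):

```python
from ultralytics import YOLO

model = YOLO("face_yolov8n.pt")
results = model("portrait.png", conf=0.3)   # confidence threshold, like ADetailer's detection setting

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"face at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), confidence {float(box.conf):.2f}")
```

Raising conf makes detection stricter (fewer boxes, background faces ignored), which is exactly what the detection threshold slider does in the extension.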
"s" (small) version of YOLO offers a balance between speed and accuracy, while the "n" (nano) version prioritizes faster ADetailer is an extension for the stable diffusion webui, designed for detailed image processing. SD Artists Browser - a Hugging Face Space by mattthew. Even when I input a LORA facial expression in the prompt, it doesn't do anything because the faceswap always happens at the end. It's not hidden in the Hires. 6 update, all I ever saw on at the end of my PNG Info (along with the sampling, cfg, steps, etc) was ADetailer model: face_yolov8n. This deep dive is full of tips and tricks to help you get the best results in your digital art. After generation, adetailer will find each face and then use a wildcard in the adetailer prompt, like __celeb__ assuming you have celeb defined as a wildcard. There are simpler nodes like the facerestore node which simply applies a pass that runs GPFGAN as well. 5. But when I enable controlnet and do reference_only (since I'm trying to create variations of an image), it doesn't use Adetailer (even though I still have Adetailer enabled) and the faces get messed up again for full body and mid range shots. I'll have to do some more tinkering but this def helps. Realistic Vision: Best realistic model for Stable Diffusion, capable of generating realistic humans. true. A reason to do it this way is that the embedding doesn’t In Automatic111. epi_noiseoffset - LORA based on the Noise Offset post for better contrast and darker images. Yes, you can use whatever model you want when running img2img; the trick is how much you denoise. Hands are still hit or miss, but you can probably cut the amount of nightmare fuel down a bit with this. Before the 1. 0, Turbo and Non-Turbo Version), the resulting facial skin texture tends to be excessively smooth, devoid of the natural imperfections and pores. parameters: time-traveling chef visits ancient civilizations, introducing durian in each era. There's still a lot of work for the package to improve, but the fundamental premise of it, detecting bad hands and then inpainting them to be better, is something that every model should be doing as a final layer until we get good enough hand generation that satisfies You paint it like it was a childish decision. I use SD. Reply This video is 2160x4096 and 33 seconds long. The way the second dev edited the project homepage to showcase nsfw uses was the childish move. 3, ADetailer dilate/erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0. AfterDetailer extension can fix faces, it's pretty good. 0) in negative prompt but the result is still bad, so hands are impossible /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Others are saying ADetailer, but without clarification, so let me offer that: ADetailer's fix is basically a form of img2img or inpainting. I activated Adetailer with a denoise setting of 0. But the details are a bit messy, and the face is a bit off. 4, ADetailer inpaint only masked: True /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. 
The best use case is to just let it img2img on top of a generated image with an appropriate detection model; you can use the img2img tab and check "Skip img2img" for testing on an existing picture. I'm wondering if it is possible to use ADetailer within img2img to correct a previously generated AI image that has a garbled face and hands.

I might be wrong. Sometimes it's also the clip skip layer.

Say goodbye to manual touch-ups and discover how this game-changing extension simplifies the process, allowing you to generate stunning images of people with ease. Check out our new tutorial on using Stable Diffusion Forge Edition with ADetailer to improve faces and bodies in AI-generated images.

This is a problem with so many open-source things: they don't describe what the thing actually does, and the settings are configured in a way that is pretty esoteric unless you already understand what's going on behind them.

Restore Face makes the face look almost caked and washed out in most cases; it's more of a band-aid fix.

I tried increasing the inpaint padding/blur and mask dilation parameters (not knowing enough about what they do); these parameters did not make the red box bigger.

You'll get much better faces, and it's easier to do things like get the right eye color without influencing the rest of the image with it.

Faces, hands, etc. are complex, and if they are too small in the image they'll be distorted. You get a monstrosity. Looking at the face very closely, you'll see that it doesn't produce a good face. But if a face is farther away, it doesn't seem to be able to make a good one. As I understand it, ADetailer needs those pixels to have something to work with, to diffuse and then create a new face. Add "head close-up" to the prompt, and with around 400 pixels for the face it will usually end up nearly perfect. But I am not sure if it's my specific model's problem or a global problem.

DreamShaper: best Stable Diffusion model for fantastical and illustration realms and sci-fi scenes.

Getting celeb faces right can be difficult. I also like the fact that I was able to test PixArt Sigma with it. Effectively it works as auto-inpainting on faces, hands, eyes, and body (haven't tried the last very often, fwiw). I still did a fork of wildcards with my improvements.

No ControlNet, no ADetailer, no LoRAs, no inpainting, no editing, no face restoring, not even hires fix (and obviously no spaghetti nightmare).

A second-pass example from PNG Info: ADetailer model 2nd: mediapipe_face_mesh_eyes_only, ADetailer prompt 2nd: "blue-eyes, hyper-detailed-iris, detail-sparkling-eyes, described as perfect-brilliant-marbles, round-pupil, sharp-eyelashes", ADetailer confidence 2nd: 0.6, ADetailer use separate steps 2nd: True, ADetailer steps 2nd: 20, ADetailer use separate sampler 2nd: True.

Things like having it only work on the largest face or forcing the bbox to be square would be nice. If you're getting faces on random things, setting "Mask only the top k largest" to 1 in the detection dropdown will detail only one.
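What "Mask only the top k largest" does is easy to picture as code: sort the detected boxes by area and keep the biggest k. A tiny sketch, with the box format assumed to be (x1, y1, x2, y2):

```python
def top_k_largest(boxes, k=1):
    """Keep only the k largest detections, so random objects stop getting face treatment."""
    def area(b):
        x1, y1, x2, y2 = b
        return max(0, x2 - x1) * max(0, y2 - y1)
    return sorted(boxes, key=area, reverse=True)[:k]

boxes = [(10, 10, 40, 40), (100, 80, 400, 420), (500, 500, 520, 515)]
print(top_k_largest(boxes, k=1))   # -> [(100, 80, 400, 420)], the main subject's face
```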
In concept, it should automatically detect the face region, assuming you've selected the correct ADetailer model, and resolve it with the input you have listed in the ADetailer positive prompt area. ADetailer's face model auto-detects the face only. I would like to have it include the hairstyle.

I have to wonder what source you're using in ReActor.

It still has things I miss from A1111 (like the ADetailer extension), even if the segment syntax of Swarm is a good start.

The postprocessing bit in FaceswapLab works OK: go to the global processing options tab, set the processing to come AFTER ALL (so it runs after the faceswap and upscaling), then set denoising around 0.15-0.2ish and add in your prompt etc.; I found setting the sampler to Heun works quite well.

ADetailer problem: when I try to fix both face and hands, it quite often turns fingers and some other parts into faces.

I also noticed that faces are always bad in a scenario where the image resolution is low and the face is not close to the camera. Basically, if you have low resolution, you will only get good faces if they are close. Stable Diffusion needs a certain amount of pixel space in the image to produce good, coherent details.

I always get great results performing an "only masked region" img2img inpainting pass on the face of characters.

The following has worked for me: ADetailer -> Inpainting -> inpaint mask blur; the default is 4, I think.

Copy the generation data and then make sure to enable HR Fix, ADetailer, and Regional Prompter first to get the full data you're looking for.

I have very little experience here, but I trained a face with 12 photos using textual inversion and I'm floored with the results. Why don't you combine Tiled Diffusion with ControlNet Tile and try that?

After Detailer (ADetailer) is a Stable Diffusion Automatic1111 web-UI extension that automates inpainting and more. You can do it easily with just a few clicks; the ADetailer extension does it all. ADetailer made a small splash when it first came out, but not enough people know about it. Without ControlNet and ADetailer, A1111 is pointless; those two should be built in. ADetailer will be the biggest difference in conjunction with hires fix; that will get you 90% of the way there. And the denoise value works best around 0.4.

Use ADetailer to automatically segment the face or body of your character and apply the LoRA in ADetailer's positive prompt (but not the main model's positive prompt).

But recently Matteo, the author of the extension himself (shoutout to Matteo for his amazing work), made a video about character control of the face and clothing.

Here's the juice: you can use [SEP] to split your ADetailer prompt and apply different prompts to different faces. Giving a prompt like "a 20 year old woman smiling [SEP] a 40 year old man looking angry" will apply the first part to the first face (in the order they are processed) and the second part to the second face.
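A small sketch of how a [SEP]-split prompt maps onto detected faces. This is not ADetailer's own code; it just mirrors the behaviour described above, and the assumption that the last prompt is reused when there are more faces than prompts is mine.

```python
def assign_face_prompts(adetailer_prompt, face_boxes):
    prompts = [p.strip() for p in adetailer_prompt.split("[SEP]")]
    # One prompt per face, in processing order; reuse the last prompt if faces outnumber prompts.
    return [prompts[min(i, len(prompts) - 1)] for i in range(len(face_boxes))]

faces = [(50, 40, 200, 210), (300, 60, 440, 220)]   # two detected faces, in processing order
pairs = zip(faces, assign_face_prompts(
    "a 20 year old woman smiling [SEP] a 40 year old man looking angry", faces))
for box, prompt in pairs:
    print(box, "->", prompt)
```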
Do you have any tips on how I could improve this part of my workflow? Thanks!

ADetailer: great for character or facial LoRAs to get finer details, while the main prompt can focus on broad-strokes composition. ADetailer has at least three models each to reconstruct the face, the hands and the body, and it can use its own prompt (so you may know the prompt used for the image, but not the one used in ADetailer).

If you really wanted, you could use segs for detecting nose, lips, eyes, and auto-inpaint; but at that point we're comparing apples to oranges, since the topic was specifically skin texture.

So, like, portraits with faces up close are perfect. Always struggling with the back of the hands, even with ControlNet and ADetailer, but give it a try. Hi guys, ADetailer can easily fix and generate beautiful faces, but when I tried it on hands, it only made them even worse.

I use a workflow of txt2img prompt/neg without the TI, then add the TI into ADetailer (with the same negative prompt). I'm using ADetailer with Automatic1111, and it works great for fixing faces. I tested a lower weight (~0.4) for the facial LoRAs (Perfect Eyes, Characters, and Person LoRAs) in the initial prompt and a higher weight (0.8) for the ADetailer prompt.

In the time it takes for Photoshop to even open, your work would already be complete in ComfyUI.

ADetailer faces in txt2img look amazing, but in img2img they look like garbage and I can't figure out why. I'm looking for help as to what the problem may be, because the same exact prompt that gives me lovely, detailed faces in txt2img results in misshapen faces with overly large eyes in img2img.

It works OK with ADetailer, as there is an option to run Restore Face after ADetailer has done its detailing, but many times it does more damage to the face.

It's basically a square-box detection and will work 90% of the time with no issues. The ADetailer model is for face/hand/person detection; the detection threshold is for how sensitively it detects (higher = stricter = fewer faces detected, so a blurred face on a background character gets ignored), and it then masks that part. You can turn on "Save Mask Previews" under the ADetailer tab in settings to see how the mask detects with different models (i.e. run a generation with the mediapipe model and then the same prompt/seed with the face_yolo models to see the difference).

In the base image, SDXL produces a lot of freckles on the face; after ADetailer face inpainting, most of the freckles are gone. I have a problem with ADetailer: when applying it for the face alongside XL models (for example RealVisXL v3.0, Turbo and non-Turbo versions), the resulting facial skin texture tends to be excessively smooth, devoid of natural imperfections and pores.

ADetailer (or another post-detailing option): for post-processing the more sensitive parts like faces or hands, or to simply improve skin texture by tossing a different checkpoint at your overly polished primary checkpoint.

Why not try using wildcard choices in the ADetailer prompt? For clarity: if your prompt is "Beautiful picture of __actors__, __detail__" and you put "face of __actors__" in ADetailer, you will get the same actor name. However, if your prompt is "Beautiful picture of __detail__, __actors__" and "face of __actors__" in ADetailer, you will NOT get the same prompt.
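A sketch of why the wildcard order matters, assuming the wildcard extension resolves wildcards left to right with one draw each from a seeded random generator (the names and lists below are made up stand-ins for wildcard files). When __actors__ is the first wildcard in both the main prompt and the ADetailer prompt, both consume the same first draw; put __detail__ in front of it and the draws shift.

```python
import random

WILDCARDS = {  # stand-ins for wildcards/actors.txt and wildcards/detail.txt
    "__actors__": ["Alice Example", "Bob Example", "Carol Example"],
    "__detail__": ["freckles", "dimples", "a small scar"],
}

def expand(template, seed):
    rng = random.Random(seed)
    out = template
    while True:
        hits = [(out.find(w), w) for w in WILDCARDS if w in out]
        if not hits:
            return out
        _, wildcard = min(hits)  # leftmost remaining wildcard gets the next draw
        out = out.replace(wildcard, rng.choice(WILDCARDS[wildcard]), 1)

seed = 1234
print(expand("Beautiful picture of __actors__, __detail__", seed))   # actor comes from draw #1
print(expand("face of __actors__", seed))                            # also draw #1 -> same actor
print(expand("Beautiful picture of __detail__, __actors__", seed))   # actor now comes from draw #2
```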
Switched to Comfy recently for fun, but I still miss some of the A1111 extensions.

Glad you made this guide. Unsurprisingly, check the "Restore Faces" checkbox in the UI. With the new release of SDXL, it's become increasingly apparent that enabling this option might not be your best bet.

I even tried ADetailer, but Roop always happens after ADetailer, so it didn't help either. I can just use Roop for that with way less effort and mostly better results. Wondering how to change the order of operations for running FaceSwapLab (or any face swap) and then ADetailer after? I want to run ADetailer (face) afterwards with a low denoising strength, all in one gen, to make the face details look better and avoid needing a second inpainting workflow. Both are post-processing, so it can affect it.

Use one of the SD-based upscaler alternatives so it knows what faces are supposed to look like, for example the "hires fix" option. I think if the author of a Stable Diffusion model recommends a specific upscaler, it should give good results, since I expect the author has done many tests. But in my case, 99% are separate pictures with that resolution.

But a good source face is a good starting point.

I was wondering if there's a way to use ADetailer masking the body alone.

I use After Detailer (ADetailer) instead of face restore for non-realistic images, with great success imo, and to fix hands.

The Invoke team has been relatively quiet over the past few months. We've been hard at work building a professional-grade backend to support our move to building on Invoke's foundation to serve businesses and enterprise with a hosted offering, while keeping Invoke one of the best ways to self-host and create content. We're committed to building in OSS.

He didn't want his name associated with those purposes once journalists called him about promoting deep fakes and sexual assault.

SD seems good at locking in a particular face to any given name.

Depends on the program you use, but with Automatic1111, on the inpainting tab, use inpaint with "only masked" selected. You can fix this by upscaling with hires fix, or in img2img with low denoising, which will result in the details being redone. Many SD models aren't great for that, though, as they rely on a VAE that'll lighten, darken, saturate or desaturate the image even at 0% denoising strength (so literally just VAE encoding/decoding and nothing else).

You can also customize further by including face/quality tags in the positive prompt.

For the big faces, we say "Hey ADetailer, don't fix faces bigger than 15% of the whole puzzle!" We want ADetailer to focus on the larger faces.
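Together with the "smaller than 0.6%" rule mentioned earlier, this is what ADetailer's mask min/max area ratio settings express. A hand-rolled sketch of the same filter, using the 0.6% and 15% thresholds from this thread:

```python
def faces_to_fix(boxes, img_w, img_h, min_ratio=0.006, max_ratio=0.15):
    """Keep detections whose bounding box covers between 0.6% and 15% of the image area."""
    img_area = img_w * img_h
    keep = []
    for x1, y1, x2, y2 in boxes:
        ratio = ((x2 - x1) * (y2 - y1)) / img_area
        if min_ratio <= ratio <= max_ratio:
            keep.append((x1, y1, x2, y2))
    return keep

# Tiny background crowd faces and an extreme close-up get skipped; the main subject stays.
print(faces_to_fix([(0, 0, 30, 30), (200, 100, 500, 460), (0, 0, 900, 1000)], 1024, 1024))
```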
Make sure it's turned off; in ADetailer there is an option for Restore Face too. LoRAs ruin that.

I am not a pro, but I think ADetailer gets better if you upscale in the process or have a decent pixel count from the beginning. Then the result will be better, I think. 512x512 is usually too small to get a good face, not enough pixels.

ReActor would be pointless if you're using ADetailer. I already use Roop and ADetailer. However, if you are looking for facial control, you are better off with ADetailer and/or ReActor. No mask needed. But if the source face is blurry, so will be the result.

In Auto1111 you do that sort of thing with inpainting. Or it'll take a LoRA and its positive prompt.

Kudos to the developer, it definitely has great potential!

Hi, I created an extension to use the Stable Diffusion webui API in SillyTavern. I know it has its own, but I missed being able to pass other parameters to the image-generation command and to use the styles of the API. It is a test version, but it's what I use now myself.

You need to play with the negative prompting, CFG scale and sampling steps.

Face restore doesn't work for anime faces.

Goddess: most realistic lighting of all the models and top-tier prompt adherence.

Which one should be used in which condition, or which one is better overall? They are both scaled-down versions of the original model, catering to different levels of computational resource availability.

There are plenty of tutorials on YouTube, and maybe they can get you some info you may have missed.

Is this possible within img2img, or is the alternative just to use inpainting without ADetailer?

Hello everyone, I'm sure many of us are already using IP Adapter.

Doing some testing, it seems that if I use ADetailer and have it run Restore Faces after its pass, it's about 90% less and almost imperceptible, and faces look good.

While some models have been forced to produce one specific type of result no matter the prompt (see most of the anime models, or all the ones that produce the same face), others that are more capable of understanding the prompt have a more neutral "base style".
It always brings out a lot of detail in the mouth and eyes and fixes any bad lines/brush strokes. ADetailer does a much better job no matter what distance the face is from the camera. I use the first face model and default everything.

Stable Diffusion needs some resolution to work with.

Comfy is good for power users or those wanting to do things that require complex tasks (detail enhancement, etc.).

It allows you control of where you want to place things in your image.

I think the problem is that the extensions are both using onnxruntime, but one of them is using onnxruntime-gpu and the other onnxruntime (CPU), and that makes a conflict.

Hi, I've been experimenting and trying to migrate my workflow from Auto1111 to Comfy, and now I'm stuck while trying to reproduce the ADetailer step I use in Auto1111 to fix faces. I'm using the Impact Pack's FaceDetailer node, but no matter what, it doesn't fix the face and the preview image returns a black square. What am I doing wrong? If you have "face fix" enabled in A1111, you have to add a face-fix process using a node that can make use of face-fixing models or techniques, such as the Impact Pack's FaceDetailer or the ReActor face-replacer node. I keep getting that same smudgy grey blob result.

Simply enable 2x hires fix (4x UltraSharp, 0.4 denoise is a good base) and ADetailer (downloaded through the extensions tab; start with the default settings and the first face model) to start.

I have my Stable Diffusion UI set to look for updates whenever I boot it up. It hasn't caused me any problems so far, but after not using it for a while I booted it up and my "Restore Faces" addon isn't there anymore.

Now I start to feel like I could work on actual content rather than fiddling with ControlNet settings to get something that looks even remotely like what I wanted.

It'll act after ADetailer and is more likely to be causing this.

This will draw a standard image, then inpaint the LoRA character over the top (in theory). Once enabled, it can automatically detect, mask and enhance faces.

Detail Tweaker LoRA: a LoRA for enhancing/diminishing detail while keeping the overall style/character; it works well with all kinds of base models (including anime and realistic models), style LoRAs, character LoRAs, etc.

Is ADetailer the best method in that case too? Because right now I've tried img2img all day, and the faces are imo not as good as they used to be with face restoration; there's something weird about the eyes and skin.

Look at the prompt for the ADetailer (face) and you'll see how it separates out the faces.

Otherwise, the hair outside the box and the hair inside the box are sometimes not in sync.

However, the latest update has a "YOLO World" model, and I realised I don't know how to use the yolov8x and related models, other than the pre-defined models as above. These allow granular control for face swap (ReActor) or face cleanup and overall better-looking faces (ADetailer).
Sure, the results are not bad, but it's not as detailed, the skin doesn't look that natural, etc. Upscale then restore face, different upscalers, upscale visibility, CodeFormer/GFPGAN at different weights: nothing above helps to make it as sharp as the original. Though after a face swap (with inpaint) I am not able to improve the quality of the generated faces. It helps if the source face is a highly detailed close-up, like the destination face.

I'm using face_yolov8n_v2, and that works fine. Among the models for face, I found face_yolov8n, face_yolov8s, face_yolov8n_v2, and similar ones for hands.

For example: in the main prompt, "school, <lora:abc:1>, <lora:school_uniform:1>", and in the ADetailer prompt, "school, <lora:abc:1>", and of course it works well.

ADetailer itself, as far as I know, doesn't; however, in that video you'll see him use a few nodes that do exactly what ADetailer does. I've managed to mimic some of the extension's features in my Comfy workflow, but if anyone knows of a more robust copycat approach to get the extra ADetailer options working in ComfyUI, then I'd love to see it.

This is the best technique for getting consistent faces so far!

Yup, ADetailer plays a good role, but what I observed is that ADetailer works much better for faces than for bodies. For the body I suggest DD (Detection Detailer). Tbh, in your video the ControlNet Tile results look better than Tiled Diffusion.

Hi, I'm quite new at this. I have tried different orders/combos of model and detection model confidence threshold; no matter what I adjust, it is just heads everywhere.

The ADetailer extension will automatically detect faces, so if you set it to face detect and use a character/celeb embedding in the ADetailer prompt, it will swap the face out. Simple. As far as keeping the main features of the face intact, I don't know. So turn both Restore Face options off. Anyone know how to enable both at the same time?

I know this probably can't happen yet at 1024, but I dream of a day when ADetailer can inpaint only the irises of the eyes without touching the surrounding eye and eyelids.

Hi all, we're introducing Inference in v2.0 of Stability Matrix, a built-in Stable Diffusion interface powered by any running ComfyUI package.

Add More Details (Detail Enhancer): an analogue of Detail Tweaker.

Currently I can't see a reason to go away from the defaults. How exactly do you use it?

In this video, I demonstrate the power of the ADetailer extension, which effortlessly enhances and corrects faces and hands produced by Stable Diffusion.

Imagine you want to inpaint the face and have painted the mask on the face. With "Inpaint area: Whole picture", the inpaint will blend perfectly, but it most likely doesn't have the resolution you need for a good face.
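A quick back-of-the-envelope sketch of why "Only masked" (with a separate width/height) is the usual alternative: under "Whole picture" the face only ever gets the pixels it occupies on the canvas, while "Only masked" crops the face region and diffuses it at the separate resolution before scaling it back down. The numbers below are just the running example from this thread (a 1024px canvas, a face spanning an 8th of it, separate width/height set to 1024); swap in your own.

```python
canvas_px = 1024
face_fraction = 1 / 8          # the face spans an 8th of the canvas edge
separate_res = 1024            # "Use separate width/height" for the masked crop

whole_picture_face_px = canvas_px * face_fraction
print(f"Whole picture: the face is diffused at ~{whole_picture_face_px:.0f}px across")   # ~128px
print(f"Only masked:   the face crop is diffused at ~{separate_res}px across, then pasted back")
```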