ComfyUI and Daz

I think AI is now ready to make renders "photorealistic"

For years there have been loads of requests for help making renders look like real life. Here is a very quick render (under 2 mins) put through ComfyUI with the Flux.1 model.

ai_test.png
1024 x 1024 - 1M
ai_test1.png
1024 x 1024 - 1M

Comments

  • QuasarQuasar Posts: 638

    It's not 100% but it looks pretty good. It messed up with her left hand, and her shoulders are really sharp. It did add a lot of realism to most of the scene though. 

  • droidy001droidy001 Posts: 282

    Quasar said:

    It's not 100% but it looks pretty good. It messed up with her left hand, and her shoulders are really sharp. It did add a lot of realism to most of the scene though. 

    It was really just a proof of concept, done very quickly: only about 10 minutes from starting the Daz render to loading it into ComfyUI and trying a couple of settings.

    With more time spent setting up the scene and rendering, then fine-tuning Comfy, it would be many times better.

  • SnowSultanSnowSultan Posts: 3,595
    edited October 7

    The problem with using AI this way is that it's inconsistent. That isn't a problem if you're making a single image, but if you want a recurring character, clothing, or an environment, AI is still unable to do it well. I tried this many, MANY times in the past with Stable Diffusion and ControlNet. There is definitely potential in using AI with 3D, as AI blows the doors off any renderer when it comes to hair and realistically folding clothing. This is actually what I hoped DAZ's AI would do, but I don't think there have been any real updates to it.

     

    edit: Quasar, you're still around? Good to see you.  :)

    Post edited by SnowSultan on
  • charlescharles Posts: 846
    edited October 7

     

    This is exactly what I've been working on for over a year now. If you're planning to create consistent characters, it's important to set everything up in advance, which deserves its own detailed thread.

    Speaking of which, check out this thread: Iray Photorealism Discussion.

    To start, you'll want to create a character design profile package with a variety of images. For headshots, ensure you include at least a frontal view, a 30-degree side view, and a full side profile; I recommend even more angles for better accuracy. Do the same for mid-body and full-body shots, including the clothing sets you plan to use. You can create these images using tools like MJ, SD, Daz, or even real photos. The headshots are especially useful when combined with ControlNet's IPAdapter Plus Face or Face Plus 2, though I personally prefer the original version. Set the face images to "multiple" in independent control, which will get you quite far.

    For even better results (except for profile shots), you can use ReActor. Here you're limited to one image (the multiple option doesn't work), but the side profile image often works best. Make sure to uncheck "Swap in generated image" and check "Swap in source image." This way the face swap happens before IPAdapter and the other processing, allowing for detailed refinement by the AI. ReActor reshapes the face to something more realistic than a stylized Daz character usually is, whereas IPAdapter doesn't do a good job of actually refitting the face shape; it's better at adding textural detail.

    However, there are still challenges with head scaling and angles, which can make it difficult to keep characters consistent. I've developed an extension for Automatic1111 that helps with this, and I'll have a new update tomorrow, version 1.2.0. If you're interested, I'd recommend waiting until that release. You can check it out here: FacePop GitHub. Basically, it detects faces in the img2img input, crops each one out to a new image, scales it to a specific size, and rotates the head to be as close to upright as possible. It then runs an AI pass on that crop before returning it to the original image, and finally lets the full image be processed with a mask over the face area we already handled so it doesn't get processed twice. This way faces are always processed at a consistent scale and in the best orientation. It can handle faces turned to the side or even upside down.
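    For readers who want to picture that loop, here is a minimal sketch of the crop, rotate, process, and restore idea, using Mediapipe's face detection (the post above says FacePop uses Mediapipe). `ai_process` stands in for whatever img2img call you run on the crop; this is an illustration of the technique, not FacePop's actual code.

    ```python
    import cv2
    import numpy as np
    import mediapipe as mp

    def bump_faces(img_bgr, ai_process, face_size=512):
        """Detect each face, normalize it to an upright fixed-size crop,
        run the AI pass on the crop only, then rotate the result back."""
        h, w = img_bgr.shape[:2]
        with mp.solutions.face_detection.FaceDetection(model_selection=1) as fd:
            res = fd.process(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
        if not res.detections:
            return img_bgr
        out = img_bgr.copy()
        for det in res.detections:
            box = det.location_data.relative_bounding_box
            x, y = max(int(box.xmin * w), 0), max(int(box.ymin * h), 0)
            bw, bh = int(box.width * w), int(box.height * h)
            # Roll angle from the two eye keypoints; rotating by it makes
            # the eye line horizontal, i.e. the head upright.
            r_eye, l_eye = det.location_data.relative_keypoints[:2]
            angle = np.degrees(np.arctan2((l_eye.y - r_eye.y) * h,
                                          (l_eye.x - r_eye.x) * w))
            M = cv2.getRotationMatrix2D((x + bw / 2, y + bh / 2), angle, 1.0)
            upright = cv2.warpAffine(out, M, (w, h))
            crop = cv2.resize(upright[y:y + bh, x:x + bw], (face_size, face_size))
            crop = ai_process(crop)  # placeholder for the img2img call on the crop
            # Paste the processed crop back and undo the rotation. FacePop blends
            # under a feathered mask so the later full-image pass skips this area;
            # a hard paste keeps this sketch short.
            upright[y:y + bh, x:x + bw] = cv2.resize(crop, (bw, bh))
            out = cv2.warpAffine(upright, cv2.invertAffineTransform(M), (w, h))
        return out
    ```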

    I also plan to begin testing and adapting it to work with ForgeUI this week. It may or may not make its way to ComfyUI, but much of the backend could potentially be handled with a node layout. Still, the aggressive face detection I've implemented using Mediapipe and template masking is a key feature.

    I definitely want to get this working on ForgeUI so I can test Flux integration. Everything I’ve seen so far suggests it’s a fantastic model.

    Once that's done—depending on compatibility, likely in about a week—I’ll start a new thread and blog about "AI Bumping," which dives deeper into using AI to create consistent Daz characters. This will provide more detail than what I've shared here.

    This has been almost my entire life for the last year: working to perfect this process and building apps so others can do the same. AI Bumping is the future of Daz art, no doubt about it.

     

     

    Post edited by charles on
  • LauritaLaurita Posts: 222
    edited October 7

    I am following an artist on DeviantArt (damnmad660) who does this with great success.

    I tried my luck myself but didn't succeed so far. However, I think this is the way to go to improve render results.

    Post edited by Laurita on
  • FSMCDesignsFSMCDesigns Posts: 12,754

    charles said:

     

    I definitely want to get this working on ForgeUI so I can test Flux integration. Everything I’ve seen so far suggests it’s a fantastic model.

    Yep, Forge and Flux are an amazing combination. If anyone just wants to try Flux without installing Forge and all its data, you can try it here: https://www.krea.ai/home

  • droidy001droidy001 Posts: 282

    SnowSultan said:

    The problem with using AI this way is that it's inconsistent. That isn't a problem if you're making a single image, but if you want a recurring character, clothing, or an environment, AI is still unable to do it well. I tried this many, MANY times in the past with Stable Diffusion and ControlNet. There is definitely potential in using AI with 3D, as AI blows the doors off any renderer when it comes to hair and realistically folding clothing. This is actually what I hoped DAZ's AI would do, but I don't think there have been any real updates to it.

    Absolutely agree, it's no good for consistency. I do think it's possible for someone with more experience and bigger compute power to create an AI model from Daz content that would be consistent, and to create a series of realistic images from Daz renders. To be able to render a scene, then pass it through AI with a list of the assets used, and get a consistent series of images would be great.

     

    For me, though, it's great. I can create a composition in Daz and pass the (not perfect) result through AI to get much better images.

     

    I've only discovered Flux in the past couple of days. After the initial setup and a little bit of learning, I'm getting reasonable (for me) results. The beauty of Flux is the plain-text prompts; add to that the ease of setting up the composition in Daz, and it's a huge step forward for those of us who are artistically challenged.

  • droidy001droidy001 Posts: 282

    charles said:

     

    This is exactly what I've been working on for over a year now. If you're planning to create consistent characters, it's important to set everything up in advance, which deserves its own detailed thread.

    ...

     

     

    Really looking forward to seeing your results. I hope your hard work pays off.

  • MasterstrokeMasterstroke Posts: 1,983
    edited October 7

    Right now the AI is obviously not there yet.
    Going by these images, the AI is changing too much and improving too little.
    It changes too much of the original character and makes it someone else, which is a no-go for me.
    An AI post-render realism filter should fix what the render engine fails to do: accurate lights and accurate shaders, especially accurate skin and hair shaders.
    Nothing the AI does here adds more realism to the scene.
    So, not for me.

    Post edited by Masterstroke on
  • droidy001droidy001 Posts: 282

    Laurita said:

    I am following an artist on DeviantArt (damnmad660) who does this with great success.

    I tried my luck myself but didn't succeed so far. However, I think this is the way to go to improve render results.

    If you haven't tried Flux, I'd suggest giving it a go; it's such a leap forward. I'm by no means an expert in AI image generation, but as you can see from my original post, it has so much potential.

  • FSMCDesignsFSMCDesigns Posts: 12,754
    edited October 7

    droidy001 said:

    I think AI is now ready to make renders "photorealistic"

    For years there have been loads of requests for help making renders look like real life. Here is a very quick render (under 2 mins) put through ComfyUI with the Flux.1 model.

    It's been that way for a while now, but as noted, character consistency is harder, especially for newer users; there are ways to achieve it, but it takes work and some skill.

    If you just want to enhance a render and make it more realistic, I use Krea https://www.krea.ai/home; their enhancer is just the best in my experience.

    For other render enhancing, I would suggest Fooocus or Forge over Comfy or Automatic1111, since the interface is so much better.

    Here is an example of using inpainting in Fooocus with a render to make it more realistic.

    I did a quick render of a tan woman in a purple bikini, but I want her face to be more realistic.

    Load the image up in Fooocus under the inpainting tab, select just the face area, set the right parameters (detailed female face), and let the AI do its thing.

    You can change any part of a render with inpainting. Say I want new hair; I am thinking a messy platinum blond updo (no need for OOT hair, LOL).

    What about a new top? Hmm, a tight white cropped tank top would be nice.

     

    2024-10-07_08-15-03_6205.png
    857 x 849 - 1M
    babehotinpaint.jpg
    1551 x 933 - 533K
    babehot.jpg
    857 x 849 - 749K
    2024-10-07_08-44-04_2975.png
    857 x 849 - 1M
    newtop.png
    857 x 849 - 3M
    Post edited by FSMCDesigns on
  • FSMCDesignsFSMCDesigns Posts: 12,754

    Masterstroke said:

    Right now the AI is obviously not there yet.
    Going by these images, the AI is changing too much and improving too little.
    It changes too much of the original character and makes it someone else, which is a no-go for me.
    An AI post-render realism filter should fix what the render engine fails to do: accurate lights and accurate shaders, especially accurate skin and hair shaders.
    Nothing the AI does here adds more realism to the scene.
    So, not for me.

    For that kind of consistency, you would probably have to make a LoRA of your character. That way you can generate them in any light or environment and keep that character. There are quite a few on CivitAI that are just for a certain character.

  • charlescharles Posts: 846
    edited October 7

    When it comes to AI Bumping, try to think of Daz as a ControlNet: keep character features for the face minimal, and use a detailed character profile package passed through the IPAdapter ControlNet.

    controlnet1.png
    789 x 924 - 411K
    zz_a2_bd_a1a3.png
    1440 x 1080 - 2M
    zz_c2_bd-a1a3.png
    1440 x 1080 - 2M
    bd-a1a3.png
    1440 x 1080 - 2M
    Post edited by charles on
  • charlescharles Posts: 846

    FSMCDesigns said:

    Here is an example of using inpainting in Fooocus with a render to make it more realistic.

    I did a quick render of a tan woman in a purple bikini, but I want her face to be more realistic.

    ...

     

     

    Instead of inpainting just the face, I would img2img the entire picture. The right model will fix the entire skin texture, especially the specular, to make it look more realistic overall. However, you do have details like those chains and tats that may get lost, so what I do (and this helps restore fingers, rings, and little details from your Daz image you want to keep) is take it into PS, place the Daz image as the bottom layer, place the processed image on top, and erase on the top layer anything you want to restore back to the Daz source. This is my workflow; it is a bit more work, but not a lot, and you will get amazing results.
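    That erase-to-restore step can also be scripted. Here's a minimal Pillow sketch of the same layering, assuming you've painted a grayscale mask (white where the Daz original should show through); the file names are placeholders.

    ```python
    from PIL import Image

    # Bottom layer: the original Daz render; top layer: the AI img2img pass.
    daz    = Image.open("daz_render.png").convert("RGB")
    bumped = Image.open("ai_img2img.png").convert("RGB").resize(daz.size)

    # Hand-painted mask: white = restore Daz pixels (chains, tats, rings),
    # black = keep the AI result. Equivalent to erasing on the top PS layer.
    mask = Image.open("restore_mask.png").convert("L").resize(daz.size)

    final = Image.composite(daz, bumped, mask)  # mask selects the first image
    final.save("final.png")
    ```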

  • LauritaLaurita Posts: 222

    charles said:

    The right model will fix the entire skin texture 

    Can you recommend a model? 

  • charlescharles Posts: 846
    edited October 7

    If working with SD1.5, use epiCPhotoGasm; I prefer the Z-Universal version: https://civitai.com/models/132632/epicphotogasm

    It is very solid for this type of workflow.

    The author has some XL and Flux models too, but I have yet to try them. I plan to do testing on Flux tonight.

     

    Post edited by charles on
  • charlescharles Posts: 846
    edited October 8

    So ForgeUI seems to maintain its own ControlNet as a built-in, and the developer had Multiple-Inputs at one point, removed it, and now seems reluctant to add it back in.

    From this discussion: https://github.com/lllyasviel/stable-diffusion-webui-forge/pull/264

    This workflow requires at least three control images for the best consistency. I will see about a new extension for ControlNet injection; I'll have to dump and analyze the parameter feed. I already have one such injector for AUTO1111, as yet unpublished, so hopefully it's pretty similar.

     

    Post edited by charles on
  • marblemarble Posts: 7,500

    charles said:

     

    This is exactly what I've been working on for over a year now. If you're planning to create consistent characters, it's important to set everything up in advance, which deserves its own detailed thread.

     

     

    Oh yes please - a thread dedicated to using DAZ Studio and AI together would be just what I need right now as I am just starting to get into that.  

  • WendyLuvsCatzWendyLuvsCatz Posts: 38,206

    marble said:

    charles said:

     

    This is exactly what I've been working on for over a year now. If you're planning to create consistent characters, it's important to set everything up in advance, which deserves its own detailed thread.

     

     

    Oh yes please - a thread dedicated to using DAZ Studio and AI together would be just what I need right now as I am just starting to get into that.  

    we have one

    https://www.daz3d.com/forums/discussion/591121/remixing-your-art-with-ai#latest 

    I post occasionally

    I am using AI to animate renders more often now too, and am actually exploring paid options,

    as, unlike purely generative AI images, I feel some validation in animating something I have created myself

  • RobotHeadArtRobotHeadArt Posts: 917

    charles said:

     

    However, there are still challenges with head scaling and angles, which can make it difficult to keep characters consistent. I've developed an extension for Automatic1111 that helps with this, and I'll have a new update tomorrow, version 1.2.0. If you're interested, I'd recommend waiting until that release. You can check it out here: FacePop GitHub. Basically, it detects faces in the img2img input, crops each one out to a new image, scales it to a specific size, and rotates the head to be as close to upright as possible. It then runs an AI pass on that crop before returning it to the original image, and finally lets the full image be processed with a mask over the face area we already handled so it doesn't get processed twice. This way faces are always processed at a consistent scale and in the best orientation. It can handle faces turned to the side or even upside down.

    This extension looks super useful; I will have to try it out later today. Have you considered getting it added to the list of extensions at https://github.com/AUTOMATIC1111/stable-diffusion-webui-extensions?

  • charlescharles Posts: 846

    RobotHeadArt said:

    ...

    This extension looks super useful; I will have to try it out later today. Have you considered getting it added to the list of extensions at https://github.com/AUTOMATIC1111/stable-diffusion-webui-extensions?

    I just tested it last night on ForgeUI and had to make one edit to get it working. I probably will ask to have it added to the extension list, but not yet; I have 4 other extensions about to go up on my GitHub. You can currently download it directly and stick it in extensions, or use the built-in installer and point it to the GitHub page. I have a few extra things I need to add in first, though: eye restoration (which will remove the processed eyes, replace them with your original Daz eyes, and do a separate specific AI bump on those if you want). I also had to remove support for After Detailer for now, which I want to put back in; it was done by adding a very last process call, but for some reason this throws some kind of odd GPU bug, so more research and testing are needed there.

     

  • charlescharles Posts: 846

    marble said:

    charles said:

     

    This is exactly what I've been working on for over a year now. If you're planning to create consistent characters, it's important to set everything up in advance, which deserves its own detailed thread.

     

     

    Oh yes please - a thread dedicated to using DAZ Studio and AI together would be just what I need right now as I am just starting to get into that.  

    Absolutely, let me just get some testing done.

    Tested Flux; can't say I'm a fan. Maybe more testing, or wait for someone to create a better mixed model.

    Testing SDXL; so far epiCRealism XL is looking very promising.

  • QuasarQuasar Posts: 638

    SnowSultan said:

    edit: Quasar, you're still around? Good to see you.  :)

    Thanks! *wave* Still here, checking in on the regular. I haven't been replaced by AI yet.

  • GreybroGreybro Posts: 2,502

    Those are some Wow results there.
    Enhanced

     

    FSMCDesigns said:

    ...

     

    Devil-enhanced.png
    2048 x 2048 - 5M
  • GreybroGreybro Posts: 2,502

    charles said:

    When it comes to AI Bumping, try to think of Daz as a ControlNet: keep character features for the face minimal, and use a detailed character profile package passed through the IPAdapter ControlNet.

    Ummm, amazing results here.
  • charlescharles Posts: 846
    edited October 9

    I did finally get an extension to inject Multi-Inputs for ForgeUI's built-in ControlNet.

    Here: https://github.com/TheCodeSlinger/ForgeUI-IPAdapter-MultiInput

    However, the way their CN handles the adapters is less than impressive, so until they update it my workflow is really only viable with A1111.

    FacePop will work with both, but since you can't use Multi-Inputs on ForgeUI, I don't see a reason to for my purposes.

    One thing I dislike about A1111's CN is that if you have to restart the console, it will show your CN settings in the UI but ignore them. Setting up CNs each time I load is also cumbersome. Plus, I have yet to see a save system for A1111 that will also include your multi-images; they get dropped because they are stored not as a link to the file but as part of the "p" parameter, as a numpy array converted to a Base64 string, which gets reset if you restart the console. So I made an extension that retains these settings. This is also a great way to save time by saving predesigned ControlNets.

    Here: https://github.com/TheCodeSlinger/A1111-Retain-ControlNet
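    To illustrate the problem being worked around: the multi-input images exist only as Base64 strings in the running process, so a console restart wipes them, and persisting the same encoding to disk is enough to bring them back. A rough sketch of that idea (illustrative names, not the extension's actual code):

    ```python
    import base64
    import json

    import cv2
    import numpy as np

    def save_cn_images(images, path="cn_preset.json"):
        """Encode each HxWx3 uint8 numpy image to Base64 PNG and write to disk."""
        payload = []
        for img in images:
            ok, buf = cv2.imencode(".png", img)
            payload.append(base64.b64encode(buf.tobytes()).decode("ascii"))
        with open(path, "w") as f:
            json.dump(payload, f)

    def load_cn_images(path="cn_preset.json"):
        """Decode the stored Base64 strings back into numpy images after a restart."""
        with open(path) as f:
            payload = json.load(f)
        return [cv2.imdecode(np.frombuffer(base64.b64decode(s), np.uint8),
                             cv2.IMREAD_COLOR) for s in payload]
    ```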

    I also just released an overlay-PNG-with-transparency extension that works for both A1111 and ForgeUI and goes hand in hand with FacePop. The reason for it: if a character wears, say, glasses, generative AI doesn't play well with that, making weird glasses shapes and sometimes even ignoring them. In Daz you render the glasses on the character separately as a beauty canvas with alpha checked and the glasses set as the only node. With this extension you process the face first without glasses, overlay the glasses, and give it a final processing nudge to blend everything together.

    Here: https://github.com/TheCodeSlinger/SD-WebUI-Overlay-PNG
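    The overlay step itself is just an alpha composite followed by a light img2img pass. A minimal Pillow sketch of the idea (file names are placeholders; the transparent glasses layer is the beauty-canvas render described above):

    ```python
    from PIL import Image

    # AI-processed face (no glasses) and the Daz beauty-canvas glasses render
    # exported as a transparent PNG.
    face    = Image.open("face_bumped.png").convert("RGBA")
    glasses = Image.open("glasses_canvas.png").convert("RGBA").resize(face.size)

    combined = Image.alpha_composite(face, glasses)
    combined.convert("RGB").save("face_with_glasses.png")
    # ...then run the result through img2img at a low denoise to blend the edges.
    ```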

     

    Post edited by charles on
  • marblemarble Posts: 7,500

    OK - this is all still a bit bewildering for me, so if anyone can point me to a good newbie tutorial, I'd appreciate it.

    So far I have ComfyUI installed, and prior to that I was playing with a stand-alone version of Live Portrait. I got some of my DAZ Studio characters to talk to me, and it was impressive until it came to the head movements. Then it got all messed up.

    So I started looking at Stable Diffusion, which led me to Flux and then Forge, and the problem for me is that there seems to be a confusing bunch of forks for each of these AI apps and interfaces.

    What I want to do is use my DS characters as "seeds" (is that the correct term? I also need to become fluent in this new language) and also apply poses - maybe those I have used in DAZ Studio - and clothing likewise. Animation would be a huge plus, as would talking and voice/lip sync. I believe all of this is possible but am struggling to find a starting point.

  • charlescharles Posts: 846

    marble said:

    OK - this is all still a bit bewildering for me, so if anyone can point me to a good newbie tutorial, I'd appreciate it.

    So far I have ComfyUI installed, and prior to that I was playing with a stand-alone version of Live Portrait. I got some of my DAZ Studio characters to talk to me, and it was impressive until it came to the head movements. Then it got all messed up.

    So I started looking at Stable Diffusion, which led me to Flux and then Forge, and the problem for me is that there seems to be a confusing bunch of forks for each of these AI apps and interfaces.

    What I want to do is use my DS characters as "seeds" (is that the correct term? I also need to become fluent in this new language) and also apply poses - maybe those I have used in DAZ Studio - and clothing likewise. Animation would be a huge plus, as would talking and voice/lip sync. I believe all of this is possible but am struggling to find a starting point.

    I agree with you on LivePortrait. And if you just want to do some random AI Bumping, then just about anything will do, like Flux, as the OP did.

    If you are looking for "consistent, very GOOD looking, REALISTIC characters," that's my workflow.

    If you want my workflow, you will need to use Automatic1111's Stable-Diffusion-WebUI. I had hopes for Flux and Forge, but no, they are not ready for this kind of thing yet.

    As far as ComfyUI goes, it has the potential, but I find it a bit more tedious and unstable for my workflow, and I haven't seen as good an inpainting tool for it yet... but that was several months ago.

    I'll begin with this: you will need a VERY good GPU with 8+ GB of VRAM, more like 24+, but being a long-time Daz guy, I am going to bet you have one.

    Install Automatic1111

    https://www.youtube.com/watch?time_continue=5&v=3cvP7yJotUM&embeds_referring_euri=https://www.bing.com/&embeds_referring_origin=https://www.bing.com&source_ve_path=Mjg2NjY

    You will want to use this modified version of SD1.5 as the model: https://civitai.com/models/132632/epicphotogasm. At the top you will see the different versions; I recommend Z-Universal (not the inpainting version). Download it and put it in "stable-diffusion-webui\models\Stable-diffusion".

    Install ControlNet:

    https://github.com/Mikubill/sd-webui-controlnet?tab=readme-ov-file

    and download some ControlNet modules: the IPAdapter SD1.5/SDXL ones and the IPAdapter FaceID [SD1.5/SDXL] for sure. I would also get the ones for Canny, OpenPose, and any others you might be interested in that work with SD1.5 from here: https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main ; and from here for SDXL: https://huggingface.co/xinsir ; ... I know it's all a bit of a mess.

    I would also recommend getting ReActor, here: https://github.com/Gourieff/sd-webui-reactor

    [Do IT!]

    Once you have all that set up, test by running it from your webui-user.bat and switching the checkpoint model (very top) to the Z-Universal model.

    Switch to Img2Img tab

    Put the Daz image you want to process in the image section.

    Sampling Method: I recommend "Restart"

    Schedule type should be "Automatic"; if that's not an option, use "Karras".

    Sampling Steps: 33-40

    In the Resize tab, be sure to click the yellow triangle; that will resize to your Daz image's dimensions. If it's more than about 1440 in width or height, switch to Resize By and scale it down to that or lower.

    CFG Scale: I recommend 6-8 (just leave it at the default 7 for now).

    Denoising!!! This is where you will play and test over and over again; start Denoising Strength at 0.4.

    Seed: -1 (which will grab a random one)

    Generate button! See what you get. I'll follow up with another post on using the ControlNets and ReActor next, but that's the minimum you need to get started. (A scripted version of these settings is sketched below.)
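    If you'd rather drive those settings from a script than click through the UI, A1111 exposes the same parameters over its HTTP API (launch webui-user.bat with the --api flag). A minimal sketch; the prompt and file names are placeholders, and the checkpoint is assumed to already be switched to Z-Universal.

    ```python
    import base64
    import requests

    # Read the Daz render and encode it for the API.
    with open("daz_render.png", "rb") as f:
        init_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "init_images": [init_b64],
        "sampler_name": "Restart",          # recommended sampler
        "steps": 35,                        # 33-40 suggested
        "cfg_scale": 7,                     # 6-8 suggested
        "denoising_strength": 0.4,          # the main dial to experiment with
        "seed": -1,                         # -1 = random seed each run
        "prompt": "photo, realistic skin",  # placeholder prompt
    }

    r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
    r.raise_for_status()

    # The API returns a list of Base64-encoded result images.
    with open("bumped.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
    ```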

     

     

     

  • marblemarble Posts: 7,500

    I wish there was a Thank-you button for these posts, but thank you anyway.

    I have an RTX 4080 with 16GB - that will have to do because I can't afford to upgrade now.

    On reading your workflow, I think I might have installed the wrong setup and maybe should delete it all and start again. I have downloaded several Flux packages but not yet installed them. I also have Pinokio, but I'm not sure what that gives me over and above the stand-alone versions. I have ComfyUI, but it is a portable, self-contained version. So do you think I should dump it all and start over? All of it has been free downloads, so I'm not losing anything.

  • charlescharles Posts: 846
    edited October 10

    I feel like maybe I hijacked this thread, so I'm moving to https://www.daz3d.com/forums/discussion/704766/chuck-s-ai-bumping-thread

    Post edited by charles on