How do you use Daz3D products and AI together? My two cents

Diomede Posts: 15,169
edited April 3 in Daz AI Studio

How have you been combining Daz3D products and AI together?

Various forms of AI are being incorporated into digital art tools.  This is true with or without Daz3D.  This is true with or without Midjourney, DALL-E, etc.  

(1) Postwork processing?  Digital image editors are incorporating AI in the background for generative fills.  So no matter what Daz3D does, many 2D renders of its assets will be postworked with some form of AI by some users, perhaps most.  Filter Forge is essentially an AI image processor.

(2) Pre-work?  Does anyone use Daz3D products to 'block out' a scene?  An example would be rendering a base scene and then using ControlNet to detect the basic lines, or the depth, or the... for the composition of the desired final output.

(3) Backgrounds and Shadow Catchers?  Some people may use an AI processor to generate a background farm or office or ship or..., and then render their characters using Iray, and then integrate the two in an image editor.

(4) Iterative?  One could use AI to block out a basic scene, use that as a backdrop for Daz3D character renders, which are then sent back to AI, and then reloaded in Daz Studio, then...

(5) Other?

I am glad to see Daz3D make their first stab at incorporating their large library of assets.  I call this a 'first stab' because I am not confident this is the way to go.  It seems to me they will be giving themselves an incentive to close off their library from use with the other big AI programs.  Adobe, OpenAI, etc. ain't Smith Micro / Poser.  The Daz library of assets, as impressive as it is, is no match for Adobe's stock photo collection.

--- Another workflow is to start with low-res objects, not highly detailed realistic objects. For this, I used my own low-res custom female figure, my own simple terrain (2), and my own low-res windmill.  I used Stable Diffusion to go from my low-res scene to the output, relying on the ControlNet extension for SD to detect edges and depth from my 3D renders. So I know I can do anything without the Genesis figures and their associated content.  But being able to do anything is not the same as wanting to do without the library of assets.  So I am going to be curious to see where this all leads.  Will the Daz content library be the front end of art projects, to block them out?  Or the database used to go from low res to high res?  Or the back end, to add fine detail?  All?  Something else?
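Roughly, that render-to-output step looks like this in Python with the open-source diffusers library. This is a minimal sketch, not my exact Stable Diffusion + ControlNet setup; the model names are common public checkpoints and the file paths are placeholders:

```python
# Sketch: low-res Daz render -> detected edges -> ControlNet-guided output.
# Assumes the diffusers and controlnet_aux packages; paths are placeholders.
import torch
from PIL import Image
from controlnet_aux import CannyDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

render = Image.open("lowres_windmill_scene.png")  # the 3D block-out render
edges = CannyDetector()(render)  # detected lines lock in the composition

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "pinup of a woman in front of an old windmill, photorealistic",
    image=edges,
    num_inference_steps=30,
).images[0]
image.save("windmill_pinup_final.png")
```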

So, how do YOU use Daz3D products and AI together?

 

 

Side By Side Windmill Pinup.png
1024 x 768 - 746K
Post edited by Diomede on

Comments

  • Diomede Posts: 15,169
    edited April 3

    My First DAZ AI prompt output

    My prompt was "A man rowing a boat on an ocean under a stormy sky"

    The result was

    dazai_8a33da12-f1f5-11ee-8f4c-8de68b29fa88.png
    1024 x 1024 - 1M
    Post edited by Diomede on
  • RobotHeadArt Posts: 917

    Diomede said:

    How have you been combining Daz3D products and AI together? [...]

     

    I cover some of these workflows in my tutorial at the other store.  There are many opportunities for using Daz, Stable Diffusion, and other AI tools together. Some other things (a minimal inpainting sketch follows the list):

    • Use Stable Diffusion Inpainting to fix poke-through, fireflies, and other Daz Studio render glitches
    • Use Stable Diffusion Inpainting to turn Daz polygon-strip hair into realistic-looking hair
    • Use Stable Diffusion Inpainting to fill out empty scenes, adding many background elements that would take too much memory as instances in Daz Studio
    • Use Stable Diffusion to create textures for models in Daz Studio
    • Use a Stable Diffusion upscaler to take a smaller Iray render and increase its resolution, avoiding lengthy render times in Daz Studio
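
    For the inpainting items, the core step is just a masked image-to-image pass. A minimal sketch with the diffusers inpainting pipeline, assuming a hand-painted mask over the glitched region; the model name and file paths are illustrative, not the tutorial's exact setup:

    ```python
    # Sketch: repair a poke-through glitch by regenerating only the masked area.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    render = Image.open("iray_render.png")   # the glitched Iray render
    mask = Image.open("glitch_mask.png")     # white where repair is wanted

    fixed = pipe(
        prompt="seamless fabric, natural cloth folds",
        image=render,
        mask_image=mask,
        num_inference_steps=30,
    ).images[0]
    fixed.save("iray_render_fixed.png")
    ```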
  • Diomede Posts: 15,169

    RobotHeadArt said:

    I cover some of these workflows in my tutorial at the other store. [...]

    Great info.  Thanks.

  • MimicMolly Posts: 2,194

    I usually treat whatever random AI image I get (I have used a lite version of Stable Diffusion, then the DALL-E that Copilot uses, and Craiyon) as a "base", then render a DAZ human character that looks similar to the one in the AI image, and manually combine them with GIMP.

    In all, it's just been entertaining photomanipulation practice/doodles for me. If I wanted to create a more "heartfelt" illustration, I'd probably draw it by hand (with DAZ serving as a pose reference). I could possibly base it on AI, but there's no fun in that kind of brainstorming.

  • tsroemi Posts: 2,744

    I agree with others here and elsewhere that this should be a feature within DS itself, for instance for backgrounds, changing the lighting, doing post-processing on the render, etc. That would be very useful! The way it is now, it seems like a nice, fun thing to play with, especially since DAZ say the images were sourced ethically. That's a good thing for sure. But how do I get it to include what's in my library?

  • alaltacc Posts: 151

    I read somewhere that the model Daz trained was built on SDXL. So, if this is true, the base model is the same as every other SDXL model, trained with images "in the wild". I stress that I don't agree with the view that this is a copyright infringement in itself (we could have a long technical/philosophical discussion here and get nowhere like everyone else; I'm only saying this to make it clear that this is not a problem FOR ME), but if they really have used SDXL as a base, there's no difference from any other SDXL-based model.

  • linvanchene Posts: 1,382
    edited April 5

    My hope is that in a few years we can use all the licensed 3D models as input for generative AI.

    This would be a huge help for creating consistent characters from different angles.

    ###

    For now I have been using Daz Studio to render out different types of images for img2img workflows with ComfyUI.

    Some examples:

    Face Swap

    I made a template scene in Daz Studio that creates portrait renders of licensed Daz characters without hair.

    Those renders are then used as the source in Stable Diffusion to replace just the face of a character.

    This works similarly to the Face Transfer option in Daz Studio.

    For research purposes I created a workflow to generate and save two versions of each image.

    One with a face generated by a Stable Diffusion checkpoint and one with a licensed Daz Character.

    Workflow example:

    Result with Joan 9 HD High Elf Add-On:
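
    The ComfyUI graph itself is hard to paste as text, but the face-swap step can be sketched in plain Python with the insightface library that the ReActor node builds on. A rough sketch only, with placeholder file names; the inswapper model has to be downloaded separately:

    ```python
    # Sketch: swap the face from a bald Daz portrait render onto an SD image.
    # Assumes the insightface and opencv-python packages; paths are placeholders.
    import cv2
    import insightface
    from insightface.app import FaceAnalysis

    analyzer = FaceAnalysis(name="buffalo_l")       # face detector + embedder
    analyzer.prepare(ctx_id=0, det_size=(640, 640))

    source = cv2.imread("daz_portrait_no_hair.png")    # face to transplant
    target = cv2.imread("sd_generated_character.png")  # image to receive it

    src_face = analyzer.get(source)[0]
    dst_face = analyzer.get(target)[0]

    # inswapper_128.onnx is the swap model ReActor uses; download it first.
    swapper = insightface.model_zoo.get_model("inswapper_128.onnx")
    result = swapper.get(target, dst_face, src_face, paste_back=True)
    cv2.imwrite("face_swapped.png", result)
    ```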

    Poses

    I set up a scene with one or two Genesis 9 default characters in Daz Studio.

    It seems sufficient to render the image in White Mode.

    That pose render can be used as input for different ComfyUI ControlNet nodes.

    Result with Xola 9 HD:

     

    Canny is a way to create a line-art version when you want to capture more detail from the pose. (example attached)

    OpenPose is an option if you are just interested in the pose and want Stable Diffusion to have more freedom. (example attached)
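
    For reference, the same pose route can be sketched outside ComfyUI with diffusers and the controlnet_aux OpenPose detector; the checkpoints named below are common public ones, not necessarily the exact nodes and models used here:

    ```python
    # Sketch: White Mode pose render -> OpenPose skeleton -> guided generation.
    import torch
    from PIL import Image
    from controlnet_aux import OpenposeDetector
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    pose_render = Image.open("genesis9_white_mode.png")  # placeholder path
    detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
    skeleton = detector(pose_render)

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Only the skeleton constrains the output, so SD keeps more freedom.
    image = pipe(
        "athletic woman in a gym, photorealistic",
        image=skeleton,
        num_inference_steps=30,
    ).images[0]
    ```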

     

    Clothing

    I did some experiments with rendering out images of clothing from Daz Studio.

    With IP-Adapter you can then use them as an img2img source.
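
    A rough diffusers equivalent of that IP-Adapter step; the checkpoint names are common public ones and the render path is a placeholder:

    ```python
    # Sketch: use a Daz clothing render as an IP-Adapter image prompt.
    import torch
    from PIL import Image
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_ip_adapter(
        "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
    )
    pipe.set_ip_adapter_scale(0.7)  # how closely the clothing render is followed

    clothing = Image.open("daz_clothing_render.png")
    image = pipe(
        "a woman wearing the outfit, studio lighting",
        ip_adapter_image=clothing,
        num_inference_steps=30,
    ).images[0]
    ```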

     

    Some of those workflows require the characters to face the viewer to work properly.

    As soon as the head is turned to the side, the faces become distorted or the clothing drifts further from the source.

    ###

    Daz AI Studio could now provide an alternative option for those challenges.

    Guess we will have to test what happens when we stack LoRAs for characters, hair, and clothing together once those features become available as inputs...
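
    In diffusers terms, the stacking would look something like the sketch below; the LoRA files are hypothetical and the per-adapter weights control the mix:

    ```python
    # Sketch: stack separate character / hair / clothing LoRAs on one pipeline.
    # Assumes diffusers with peft installed; LoRA file names are hypothetical.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    pipe.load_lora_weights("character_lora.safetensors", adapter_name="character")
    pipe.load_lora_weights("hair_lora.safetensors", adapter_name="hair")
    pipe.load_lora_weights("clothing_lora.safetensors", adapter_name="clothing")

    # Per-adapter weights decide how strongly each LoRA pulls on the result.
    pipe.set_adapters(
        ["character", "hair", "clothing"], adapter_weights=[0.9, 0.7, 0.7]
    )

    image = pipe(
        "full body portrait of the character", num_inference_steps=30
    ).images[0]
    ```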

    Profile shot generated in Daz AI Studio:

    Some additional examples are attached:

    ComfyUI_ReActor_v1001.png
    1728 x 2160 - 2M
    portrait_Joan_CS05AOD_wh-shirt_725706329412037_00001_1755x2160_mq.jpg
    1755 x 2160 - 217K
    pose_Daz-Studio 1728x2160 mq.jpg
    1728 x 2160 - 147K
    Xola9_Daz3D_Iray_v1001 1728x2160 mq.jpg
    1728 x 2160 - 230K
    gym_pose-SDI-StandB_xola_678554335634103_00001_ 1755x2160 mq.jpg
    1755 x 2160 - 282K
    002_preview-render_white-mode 884x1080 mq.jpg
    864 x 1080 - 82K
    004_control-net_canny-edge 684x1080.jpg
    864 x 1080 - 94K
    v9_uconv_02_oops 1728x2160.jpg
    1728 x 2160 - 118K
    ComfyUI_temp_rpqpj_00003_.png
    512 x 640 - 36K
    Post edited by linvanchene on
  • FantastArt Posts: 313

    linvanchene said:

    My hope is that in a few years we can use all the licensed 3D models as input for generative AI. [...]

    This is the most interesting thing I've read today so far :-)

  • echristopherclark

    FWIW, if they ever bring AI features into Studio itself, I hope they'll have a universal off switch. Adobe doesn't, and it's one of the things that sucks about using Photoshop these days. Those of us who don't want to use the features shouldn't have to dance around them. Just let us turn them off altogether.

  • 844d1cc615 Posts: 86

    echristopherclark said:

    FWIW, if they ever bring AI features into Studio itself, I hope they'll have a universal off switch. Adobe doesn't, and it's one of the things that sucks about using Photoshop these days. Those of us who don't want to use the features shouldn't have to dance around them. Just let us turn them off altogether.

    Agree, but make it opt-in, not opt-out.  I don't want to tell the waiter all the food I don't want.  I'd prefer to just say what I do want.  Takes less time.

  • Siciliano1969 Posts: 433
    edited April 7

    Here is my Gen 8.1 Flight Attendant rendered in DAZ Iray (top) and the same render brought into Leonardo AI img2img with a simple prompt (below).  

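    Leonardo AI is a hosted service, but the same image-to-image pass can be approximated locally with diffusers; the strength value decides how far the result drifts from the Iray render (a sketch only, with an illustrative model name):

    ```python
    # Sketch: Iray render -> img2img with a simple prompt.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    render = Image.open("Stew DAZ.jpg").convert("RGB")  # the Iray render above
    out = pipe(
        "professional photo of a beautiful 25 year old stewardess",
        image=render,
        strength=0.45,  # low values keep the Daz composition mostly intact
        num_inference_steps=30,
    ).images[0]
    out.save("stewardess_ai.png")
    ```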

     

    Stew DAZ.jpg
    1500 x 1500 - 891K
    Default_Professional_photo_of_a_beautiful_25_year_old_stewarde_3.jpg
    1344 x 1344 - 829K
    Post edited by Siciliano1969 on
  • dennisgray41 Posts: 803

    Diomede said:

    My First DAZ AI prompt output

    My prompt was "A man rowing a boat on an ocean under a stormy sky"

    The result was

    Very nice. Love the sky. Wondering why he is rowing away from the ship and tossing his oar away. Is Daz AI suicidal? Or maybe homicidal? Do they still use the Three Laws of Robotics?
