Comparing Daz AI to Midjourney - thoughts

LiliumVA Posts: 17
edited April 4 in Daz AI Studio

All I am seeing, despite upgrading, is two options when it comes to prompts, and only an extension of the generated work. No environment options, or anything else. Anyway... *edit* I went straight to the AI Studio and had not seen the 'about' page first. I subscribed there. So I've paid $4 for this and not all the features are there. Again, what is the point at this point?

So, I have used Midjourney for a while. I also use ChatGPT for prompt descriptions when generating specific works. That all said, I wanted to see how Daz's AI Studio would compare to Midjourney with some pretty basic prompts. I generated both with Victoria 9 and without. I selected the best, most accurate, and most realistic (in terms of what should be expected) image generated.

The prompt used:

Gorgeous elf female with chestnut updo hair and blue eyes wearing a black sparkly dress

The results:

DAZ AI

 

 

Midjourney

 

Another prompt:

Model in a long black dress, in the style of futuristic imagery, dark cyan and red

 

Daz AI:

 

 

Midjourney:

 

Thoughts:

I understand this is still in beta. I understand there will be a learning process for the AI. I completely get that. However, SD is an absolutely terrible AI model to pair Daz with. It lacks the capacity to render faces and body parts properly. Granted, the first image with Victoria is good, but it is something that can easily be done within the actual Daz software, so it raises the question of what this can be used for. How is this worth it to consumers who use Daz, if the results are sub-par compared to competitors? What is the benefit to consumers if the overall result of the product is more post-production work and repeated generation to get accurate results?

I was not able to use environments, inpainting, or anything else, despite (again) upgrading, so I cannot judge those. As it stands now, for $3.99, with so few options, I find this lacking and not worth it. Others may feel that it is worth it, especially given the cost of Midjourney or other AI platforms.

Post edited by LiliumVA on

Comments

  • TheKD Posts: 2,691

    In your examples, I think the dazai one looks way better IMO

  • LiliumVA Posts: 17
    edited April 4

    TheKD said:

    In your examples, I think the dazai one looks way better IMO

    How? Can you explain? As mentioned, the first image is something that can be done with Daz on its own. In the second, the face is entirely warped, as if it's playdough. Not to mention the hands, leg, foot, and overall body shape of the Daz one are misshapen.

    Post edited by LiliumVA on
  • TheKD Posts: 2,691
    edited April 4

    I just mean in an A vs B comparison, I think Daz AI looks better. In the first one, the skin on the Daz one looks more like a real person, while the Midjourney one looks like an amateur touch-up photo from the 90's, when people used to smudge-brush away all the details. In the second one, both faces are not that good, but Daz AI didn't cheat and add 'hide hands' to the prompt, and actually seems to have the correct number of digits lol. I do like the Midjourney dress a bit better though, but it doesn't scream futuristic to me.

    Post edited by TheKD on
  • LiliumVA Posts: 17

    TheKD said:

    I just mean in an A vs B comparison, I think Daz AI looks better. In the first one, the skin on the Daz one looks more like a real person, while the Midjourney one looks like an amateur touch-up photo from the 90's, when people used to smudge-brush away all the details. In the second one, both faces are not that good, but Daz AI didn't cheat and add 'hide hands' to the prompt, and actually seems to have the correct number of digits lol. I do like the Midjourney dress a bit better though, but it doesn't scream futuristic to me.

    I think, in the grand scheme of things, the only reason the first Daz one looks better in terms of 'realism' is because Victoria was used as a base, so a normal map would have been ingrained in that, while the Midjourney one has no reference. That said, the qualms you have with Midjourney are things they have corrected over time with upgrades. SD still has major issues with realistic skin.

    For example, the prompt "a gorgeous 20 year old model with long black hair, green eyes, freckled face" used with both Daz AI and Midjourney shows that Daz creates that 'glamour model' effect from the 80's with the overly soft skin. The resulting images of two attempts (16 generations) all presented soft skin. All 4 Midjourney generations produced realistic results.

    Daz AI

     

    Midjourney

     

     

     

  • xyer0 Posts: 5,909

    What would happen if you added "photorealistic" to the prompt?

  • Masterstroke Posts: 1,970

    xyer0 said:

    What would happen if you added "photorealistic" to the prompt?

    That would be the only prompt I'd be interested in. Maybe as a post filter in the render settings. Nothing generative, just refining it to photo real and some film stocks.

  • LiliumVA Posts: 17
    edited April 4

    xyer0 said:

    What would happen if you added "photorealistic" to the prompt?

    Prompt :"Photograph of a gorgeous 20 year old model with long black hair , green eyes, freckled face, photorealistic, 8k, taken with Canon R5"

    Generated with Victoria  :

     

    The 8 generations with Victoria:

     

    Generation without Victoria:


     

    The 8 generations without Victoria:

     

    The biggest issue with this when it comes to realism is that Daz models inherently have normal maps ingrained in them, so using such a model in the prompt does raise the chance of 'realism' in the results. Like the first one: it looks really nice, but it can be done IN Daz Studio already. However, with more intricate prompts, realism is tossed out due to how much SD struggles with it. The training process they have used is very different from that of Midjourney or any other AI platform. That said, SDXL and SDXL Turbo look somewhat better, but looking at what Daz AI is using, it's probably old SD. Even then, those still suffer from the whole 'glamour shoot' style skin blurring.

    There's also the issue of not being able to dictate the aspect ratio, and the AI does not follow the prompt even when strictly asked to include a 'full body' shot; most of these are on plain backgrounds.

     

     

    *edit* I also want to show the 4 results Midjourney generated for the 'freckle' prompt, so you can see how realistic the results are.

     

    I also want to add that when it comes to AI, details are important, which is why the 'taken with Canon R5' etc. was added to the other examples. When I use Midjourney, my prompts usually include 'taken with X camera, 8k, 100 ISO, 85mm' (or whatever focal length I want to use), etc., simply because of the results wanted.

    Post edited by LiliumVA on
  • alienarea Posts: 526

    As it is now, it is for the users who don't want to spend hours in setting up scenes, lights and render time but want quick pictures to post somewhere.

    Is the dark side stronger? Quicker it is, but not stronger (paraphrased from the little green guy).

  • xyer0 Posts: 5,909

    LiliumVA said:

    xyer0 said:

    What would happen if you added "photorealistic" to the prompt?

    Prompt :"Photograph of a gorgeous 20 year old model with long black hair , green eyes, freckled face, photorealistic, 8k, taken with Canon R5"

    Generated with Victoria  :

     

    The 8 generations with Victoria:

     

    Generation without Victoria:


     

    The 8 generations without Victoria:

     

    The biggest issue with this when it comes to realism is that Daz models inherently have normal maps ingrained in them, so using such a model in the prompt does raise the chance of 'realism' in the results. Like the first one: it looks really nice, but it can be done IN Daz Studio already. However, with more intricate prompts, realism is tossed out due to how much SD struggles with it. The training process they have used is very different from that of Midjourney or any other AI platform. That said, SDXL and SDXL Turbo look somewhat better, but looking at what Daz AI is using, it's probably old SD. Even then, those still suffer from the whole 'glamour shoot' style skin blurring.

    There's also the issue of not being able to dictate the aspect ratio, and the AI does not follow the prompt even when strictly asked to include a 'full body' shot; most of these are on plain backgrounds.

     

     

    *edit* I also want to show the 4 results Midjourney generated for the 'freckle' prompt, so you can see how realistic the results are.

     

    I also want to add that when it comes to AI, details are important, which is why the 'taken with Canon R5' etc. was added to the other examples. When I use Midjourney, my prompts usually include 'taken with X camera, 8k, 100 ISO, 85mm' (or whatever focal length I want to use), etc., simply because of the results wanted.

    @LiliumVA Thanks so much for the detailed response. 

  • Richard Haseltine Posts: 100,564

    I'm not sure why people are getting blank backgrounds; all of my attempts have had some kind of environment (but I usually go for so-and-so doing something, possibly adding an 'in such-and-such a place', which may help)

  • LiliumVA Posts: 17
    edited April 4

    Richard Haseltine said:

    I'm not sure why people are getting blank backgrounds; all of my attempts have had some kind of environment (but I usually go for so-and-so doing something, possibly adding an 'in such-and-such a place', which may help)

    So, when creating prompts for this one, since you cannot do some of what I mentioned before, if you stray away from the basics it starts to warp and become disfigured. Because you're straying away from the initial formula of a basic Daz model, or a Daz-related product, and more into SD, it does what a lot of AI platforms do: it becomes screwy lol. If you look at all of the 'perfect' Daz ones, they are usually shoulders-up shots.

     

    Example, the first prompt: "Gorgeous elf female with chestnut updo hair and blue eyes wearing a black sparkly dress"

    The generations :

    The only non-grey generation:

     

    There is a lot of warping around the eyes, though from a distance it looks normal. It's something that could be fixed in post, but isn't the purpose of this new AI to create things we may not be able to in Daz?

    So I added to the above prompt: "Gorgeous elf female with chestnut updo hair and blue eyes wearing a black sparkly dress dancing in a ballroom lit by candles"

    The 8 generations from the expanded prompt:

    The results:

     

     

    Every single one of the expanded prompt's results has malformations of body parts and the face. Even the background elements in some of them are warped or just... off. So it does raise the question of "What's the point?" With SD as the AI platform used by Daz, what makes it beneficial for consumers? As mentioned previously, it's supposed to make things easier, and yet it's not, simply because the platform as it stands now is producing results that are altogether wrong. So a user will have to go in and edit the work in PS, since you can't inpaint with Daz AI right now.

    Even with greyed backgrounds, there is still the process of editing it all, matching lighting and what have you. This AI could be used for backgrounds, but it still comes down to what makes this one more beneficial to consumers compared to other AI platforms.

     

     

    Post edited by LiliumVA on
  • Xellosz Posts: 742

    Well, just for fun I tried a few prompts and had a good laugh.

    DAZ is light-years behind even the smaller AI sites and local AI generation.

    example prompt: "easter party, drinking, and dancing. men are sprinkling water from a bucket on women. (Easter Monday, known as “Ducking Monday, or “Dousing Day” is a unique Hungarian tradition where boys visit the homes of girls and sprinkle water on them. According to the tradition, the water has a cleansing effect and brings good health and beauty to the girls) -auto on"

     

    I'm getting 10-20+ free generations/day on other sites.

    Where I usually play around testing the current limits (just for fun), I got 500 free generations/day as a TOP Creator without paying a $.

    Locally I could do as many as I want. 

     

    I'm waiting for Nvidia and Unreal to come out with something that could work. Daz chose the easy way to try to do AI art generation, and because of that they will have problems with their Daz 3D products, and they are a year behind most of the AI art generation sites.

     

    Good luck, I'll check it  1/2 years from now. 

  • Artini Posts: 9,445

    I do not like the Midjourney interface; Daz AI is much better in my opinion.

     

  • linvanchene Posts: 1,382
    edited April 5

    In the current beta version of Daz AI Studio, an upscaler is not yet implemented.

    An upscaler not only increases the resolution of the image by a factor of 2x or 4x but can also add or remove (skin) details.

    I used the upscale model 4x_foolhardy_Remacri for most Stable Diffusion images featuring characters.

    Some examples showing what can be achieved with custom Stable Diffusion XL workflows using licensed Daz 3D characters as img2img source:

    Genghis Khan 9 HD (Daz 3D store promo image)

    Stable Diffusion XL with checkpoint Copax TimeLessXL:

    Giana HD with Expressions for Genesis 9 (Daz 3D store promo image)

    Stable Diffusion XL with checkpoint epicrealismxl:

    KOO Virgil HD for Genesis 9 (Daz 3D store promo image)

    Stable Diffusion XL with checkpoint Copax TimeLessXL:
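
    For reference, an img2img pass like those above can also be reproduced outside ComfyUI with the Hugging Face diffusers library. This is only a rough sketch of the idea, not the exact workflow used for the images above; the checkpoint file name, source render path, and strength value are placeholders:

        # Rough SDXL img2img sketch: refine a Daz Studio render with an SDXL checkpoint.
        # The file names below are placeholders, not the actual assets used above.
        import torch
        from diffusers import StableDiffusionXLImg2ImgPipeline
        from diffusers.utils import load_image

        pipe = StableDiffusionXLImg2ImgPipeline.from_single_file(
            "copaxTimeLessXL.safetensors",   # placeholder checkpoint file
            torch_dtype=torch.float16,
        ).to("cuda")

        # A Daz Studio render used as the img2img source image (placeholder path).
        source = load_image("daz_promo_render.png")

        result = pipe(
            prompt="photo portrait, detailed realistic skin, natural lighting",
            image=source,
            strength=0.45,              # lower values stay closer to the 3D render
            guidance_scale=6.0,
            num_inference_steps=30,
        ).images[0]
        result.save("img2img_result.png")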

    ###

    My current guess is that Daz 3D trained a custom "checkpoint" for Stable Diffusion XL.

    Michael and Victoria could be "LoRAs" (Low-Rank Adaptations) trained on a few images.
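
    If that guess is right, applying such a character LoRA on top of an SDXL checkpoint would look roughly like the sketch below. Nothing here has been published by Daz 3D, so the LoRA file name and the trigger word in the prompt are purely hypothetical:

        # Hypothetical sketch: load a character LoRA on top of the SDXL base model.
        # Daz 3D has not released such weights; file name and trigger word are made up.
        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            torch_dtype=torch.float16,
        ).to("cuda")

        pipe.load_lora_weights("victoria9_lora.safetensors")  # hypothetical LoRA file
        pipe.fuse_lora(lora_scale=0.8)   # blend the LoRA in at partial strength

        image = pipe(
            prompt="photograph of victoria9 woman, chestnut updo hair, blue eyes",
            guidance_scale=6.0,
            num_inference_steps=30,
        ).images[0]
        image.save("lora_character_test.png")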

    The possibilities of Daz AI Studio will depend on which additional features Daz 3D makes available to their customers.

    - choosing different upscalers,

    - choosing advanced layouts that use two KSamplers instead of just one to generate additional details but take longer to calculate

    - choosing different samplers, schedulers, etc.

    - creating an individual UI with custom nodes

    ###

    My personal preference would be to pay Daz 3D a license fee to use their trained checkpoint and LoRA, and then use them either

    - in an offline version of ComfyUI, or

    - in a dedicated offline version of Daz AI Studio.

    In any case, I am happy that Daz 3D chose Stability AI as a partner.

    Post edited by linvanchene on
  • FantastArt Posts: 311

    Xellosz said:

    Well, just for fun I tried a few prompts and had a good laugh.

    DAZ is light-years behind even the smaller AI sites and local AI generation.

    example prompt: "easter party, drinking, and dancing. Men are sprinkling water from a bucket on women. (Easter Monday, known as "Ducking Monday" or "Dousing Day", is a unique Hungarian tradition where boys visit the homes of girls and sprinkle water on them. According to the tradition, the water has a cleansing effect and brings good health and beauty to the girls) -auto on"

     

    I'm getting 10-20+ free generations/day on other sites.

    Where I usually play around testing the current limits (just for fun), I got 500 free generations/day as a TOP Creator without paying a $.

    Locally I could do as many as I want. 

     

    I'm waiting for Nvidia and Unreal to come out with something that could work. Daz chose the easy way to try to do AI art generation, and because of that they will have problems with their Daz 3D products, and they are a year behind most of the AI art generation sites.

     

    Good luck, I'll check it  1/2 years from now. 

    totally agree

  • Wonderland Posts: 6,855
    edited April 5

    Prompt: "A beautiful woman with black flowing hair wearing a white dress looks out over the ocean on a moonlit night." Midjourney is much more creative! All four Daz images look almost identical, with plastic skin, an unclear face, and a blurry ocean.

    I'm now wondering if Midjourney is actually learning from ME. I usually request artistic, non-realistic images and this time I didn't and it gave them to me anyway. 

    Daz

     

    Midjourney 


     

     

    Post edited by Wonderland on
  • Wonderland Posts: 6,855
    edited April 5

    TheKD said:

    I just mean in an A vs B comparison, I think Daz AI looks better. In the first one, the skin on the Daz one looks more like a real person, while the Midjourney one looks like an amateur touch-up photo from the 90's, when people used to smudge-brush away all the details. In the second one, both faces are not that good, but Daz AI didn't cheat and add 'hide hands' to the prompt, and actually seems to have the correct number of digits lol. I do like the Midjourney dress a bit better though, but it doesn't scream futuristic to me.

    Did you zoom in? 
     


     

     

    Post edited by Wonderland on
  • Daz AI's results are definitely repetitive compared to Midjourney. I have gotten nearly duplicate images a few times now. Even after tweaking the prompt slightly, I've gotten the exact same pose, down to the twisted three-fingered hand, several times in a row. The entire image was almost the same, but for minor differences in the clothing and hair. But very minor.

  • FirstBastion Posts: 7,760

    Wonderland said:

    Prompt: "A beautiful woman with black flowing hair wearing a white dress looks out over the ocean on a moonlit night." Midjourney is much more creative! All four Daz images look almost identical, with plastic skin, an unclear face, and a blurry ocean.

    I'm now wondering if Midjourney is actually learning from ME. I usually request artistic, non-realistic images and this time I didn't and it gave them to me anyway. 

    Daz

     

    Midjourney 


     

     

    It is well established that Midjourney did its training by scraping 5 billion images off the net, infringing on thousands of the best artists in the world, then regurgitating those styles and techniques from those artists to produce the Midjourney generative output.

    Daz AI generation is based on the 3D renders that Daz created or legally purchased and owns. It makes sense that Daz's AI generative output will look like Daz Studio 3D renders.

    That's the difference. Daz followed the rules.

     

  • LiliumVA Posts: 17
    edited April 6

    FirstBastion said:

     

    It is well established that Midjourney did its training by scraping 5 billion images off the net, infringing on thousands of the best artists in the world, then regurgitating those styles and techniques from those artists to produce the Midjourney generative output.

    Daz AI generation is based on the 3D renders that Daz created or legally purchased and owns. It makes sense that Daz's AI generative output will look like Daz Studio 3D renders.

    That's the difference. Daz followed the rules.

     

    So, Stability AI, which Daz has partnered with for this AI platform, is under litigation for using artists' work (billions of copyright-protected pieces) to train their model. The problem is that, outside of the two models currently offered, it is still using Stability, hence the results many are getting. It's not all content created by Daz; it is still inherently Stability. They're being sued by over a dozen artists, and by Getty, for copyright infringement. I'm not saying Midjourney is somehow better in regard to copyright infringement, no; I am, however, pointing out that Stability AI is also an infringing company that Daz is choosing to partner with.

    Post edited by LiliumVA on
  • IceCrMn Posts: 2,129

    I've not used either one.

    But from the images shown here, I'm not seeing a big difference.

    Both seem to produce roughly the same "quality". By "quality" I mean the same number of visually discernible mistakes.

    All of them still struggle with hands, eyes, feet, and other attributes of human physiology.

    To be very honest, if they weren't labeled, I wouldn't know which was produced by which AI but I can easily tell all of them were made by AI.

     

    I do appreciate that Daz is following the rules for training.

    So I will give Daz the passing grade simply for that fact.

     

     

    ...also

    I'm certain that both "learn" what is "correct" by remembering which outputs the user selected to keep.

    If they both have a user account involved (or any other user ID method, even if it's not an account), then I'm sure that specific user will start to see a certain "style" to the outputs.

  • lilweep Posts: 2,476

    Daz's Stable Diffusion implementation isn't competing with Midjourney, it's competing with places like https://creator.nightcafe.studio/

  • bohemian3 Posts: 1,034
    edited October 19

    Once Daz AI Studio has the ability to inpaint and do image-to-image, I'll take a closer look at it. When it can easily produce consistent custom characters across images and styles, it has potential and could become very useful.
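
    For anyone unfamiliar with what inpainting would actually buy us, here is a rough local Stable Diffusion XL sketch using the diffusers library. The model ID is a public inpainting checkpoint and the file paths are placeholders; this is not anything Daz has announced:

        # Rough SDXL inpainting sketch: regenerate only the masked region of an image.
        # Paths are placeholders; white pixels in the mask mark the area to repaint.
        import torch
        from diffusers import StableDiffusionXLInpaintPipeline
        from diffusers.utils import load_image

        pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
            "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
            torch_dtype=torch.float16,
        ).to("cuda")

        image = load_image("render_with_bad_hand.png")  # placeholder input image
        mask = load_image("hand_mask.png")              # placeholder mask image

        fixed = pipe(
            prompt="a relaxed human hand with five fingers",
            image=image,
            mask_image=mask,
            strength=0.85,              # how strongly the masked area is repainted
            num_inference_steps=30,
        ).images[0]
        fixed.save("render_hand_fixed.png")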

     

     

     

    Post edited by bohemian3 on
  • xlpasstest Posts: 48
    edited October 19

    How did you set up the lighting for this picture? She looks absolutely stunning.

    Post edited by xlpasstest on
  • FirstBastion Posts: 7,760

    Daz AI did a pretty good job on this one too. 

  • Diomede Posts: 15,158

    bohemian3 said:

    Once Daz AI Studio has the ability to inpaint and do image-to-image, I'll take a closer look at it. When it can easily produce consistent custom characters across images and styles, it has potential and could become very useful.

    +1

  • FSMCDesigns Posts: 12,749

    I am really surprised DAZ hasn't implemented options like image-to-image and inpainting by now. There are new AI sites and apps popping up almost weekly, and now Flux and Flux.1 are the hot new thing.

  • WendyLuvsCatz Posts: 38,179

    I just do it on my own PC in Fooocus

  • FSMCDesigns Posts: 12,749

    WendyLuvsCatz said:

    I just do it on my own PC in Fooocus

    Same, but we are the exception. Since so many here are clueless when it comes to AI, and many won't even d/l 3rd-party addons because they have no clue how to install them, or they let smart content do all the work, they probably won't touch AI unless DAZ implements it along with features like image-to-image and inpainting. It's a safe bet that for the majority of users their only contact with AI is Daz AI Studio, which would be sad and akin to a person whose only contact with nature is the plants in their backyard, LOL

    I wonder if the reason DAZ hasn't implemented these features is that so few users are using Daz AI Studio. On another forum I post on, there are AI art sections and 3D art sections, and many who post in the 3D art sections never even visit the AI section.

  • IceCrMn Posts: 2,129

    To be fair, most of us have probably never even heard of Fooocus.

    I hadn't until I read this post.

    So far it's downloaded over 12GB of data.

    This will be a concern for those who have data caps.
