Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
Your entire narrative here is predicated on the false assumption that people "can't render stuff [they] don't have in Daz Studio".
You have confused Daz Studio and the Daz Store. Daz Studio is the environment where renders come from. I was saying that if you do NOT have an item at your disposal, that is analogous to the AI NOT having that data(set) to "render" your request. Please re-read what I wrote and see if it makes any more sense now. Importing an item into Daz Studio means it still arrives there; it doesn't matter where you got it from.
And to be honest, if you're generating things from scratch, we've just moved into the generative realm of AI usage - which most people who are not firmly for or against can accept as a nuanced use of AI.
One use of AI is as a tool; the other is creating the foundation of your work with AI.
The issue with "some people will mistake your work for AI" is that people can be wrong about what they think they see.
Can everyone who has had their 3D/Daz Studio-powered/rendered/CGI work mistaken for AI please raise their hands.
Daz, the company itself, got blasted over its own ads because people thought the art in the ad work was AI-generated.
I don't need to argue this, we are all living it.
Please keep the discussion on the topic, don't let it become personal.
While true in theory, if you know what you are doing and which AI assets to use, you can get the results you want much faster, and usually at better quality than what DS will provide.
Lately, when I think of a scene I want to create, either in AI or DS, I usually start it in AI. While it can be a bit frustrating at times to get close to what I want in the beginning, the final results, either by chance (multiple image generations) or with inpainting and specialized LoRAs, have proven to far exceed what I could do in DS. I create images daily and haven't opened DS in over a month, when I used to open it daily.
Say I wanted to set up a scene and render an image of a female centaur in office attire, with clothes fitted over the centaur body like you like to post, in an office environment. This would take me over an hour to set up in DS (unless I have a saved scene), especially since I probably don't have the tools to get clothing to fit over a centaur body like Matt does.
I open Fooocus (SDXL) and type in this prompt: "female centaur, horse body, female upper body, fitted business outfit, skirt on the horse body, blond updo, glasses, in an office, ultra realistic, ultra detailed, full body shot". I load the centaur LoRA I have and any lighting, detail, or style LoRAs I also want. I set it for a batch of 4 images to generate at a time. This was my second image generated and took all of 5 minutes. I can further refine it by adding other LoRAs, using different checkpoints, or even existing images to refine the pose or environment, or inpainting to give her a different face.
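Conceptually, a batch like this is just one prompt, a stack of weighted LoRAs, and a distinct random seed per image so each of the four results differs. A minimal sketch of that request structure (a hypothetical helper for illustration, not Fooocus's actual API):

```python
import random

def build_generation_jobs(prompt, loras, batch_size=4, base_seed=None):
    """Sketch of a text-to-image batch request: one shared prompt,
    a dict of LoRA names -> weights, and a fresh seed per image.
    Rerolling = running this again with a different base_seed."""
    rng = random.Random(base_seed)
    jobs = []
    for _ in range(batch_size):
        jobs.append({
            "prompt": prompt,
            "loras": dict(loras),            # e.g. {"centaur_lora": 0.8}
            "seed": rng.randrange(2**32),    # distinct seed -> distinct image
        })
    return jobs

# Hypothetical LoRA names, weights chosen for illustration only:
jobs = build_generation_jobs(
    "female centaur, horse body, fitted business outfit, in an office",
    loras={"centaur_lora": 0.8, "detail_lora": 0.5},
    batch_size=4,
    base_seed=42,
)
```

The point of the per-image seed is exactly the "batch of 4" workflow described above: same prompt and LoRA stack, four different draws, keep the best one.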
The unspoken part is I don't have to purchase a centaur model, or the clothing, or the hair, or the environment
I look at inpainting as post work for AI. I hated postwork when it came to renders and using photoshop, but I LOVE using inpainting to fix, add, or change things in AI
There's a huge difference between loading assets created by an artist-that you purchased-and putting them together, posing, moving, placing lights, adjusting camera angles, etc, compared to entering prompts and rerolling the results until you get something you like. Daz Studio is where a creator is able to put a vision together using other assets, in addition to making the renders. The only true similarity with Daz Studio to AI is pulling those assets; but those are all based on choices, and with an artistic intent. For many of us, a great deal of work goes into the renders, too.
That work to put things together, in addition to other outlets like Matt Castle's references on sculpting and drawing, for example, are what come from our mind, body and soul to create something. And when we draw, write, or do anything creative, we learn from our mistakes to improve. And with AI, that just really isn't a thing. Yeah, you can clean up a prompt or keep trying until you get an image you want, but it just isn't the same experience-not even close. As a tool to improve images? Sure I suppose, if you made the original. But I will always prefer art that comes from the creative mind, not from just typing prompts.
The only true similarity with Daz Studio to AI is pulling those assets
Yeah, thank goodness- that's the exact point I made.
And when we draw, write, or do anything creative, we learn from our mistakes to improve. And with AI, that just really isn't a thing.
That actually is a thing and almost every AI website has a place to RANK the images or report the bad ones so it can better learn good results from bad results.
I mean, how much of the anti-AI argument is based on improper TRAINING and LEARNING from copyrighted images?
AI is, to many artists' chagrin, using data from 'the best humans do' to better emulate artistic results. Does anyone remember when prompting an artist's name was a thing?
Don't we humans get better at stuff, the more we do it? By studying the works of others? It's the same system at play.
------------------------------
On another note, I took a dive down the rabbit hole with the videos exposing all the traditional artists (now) using AI and pretending it's hand-drawn/painted/illustrated.
They even have fake time lapse videos for when someone says "show me your work".
That genuinely makes less sense.
I don't see how "Daz Studio can only render assets that are in Daz Studio" is a relevant statement to this discussion. It's self-evident, it's circular, and it's made redundant by a demonstration that, yes, people can and do make custom assets if they need.
Come on, that's hardly accurate to the prompt; that's not a horse wearing a skirt, that's a skirt vaguely draped around a horse's shoulders.
Generally, public venues in the western world typically consider people whose butts aren't covered to be improperly dressed.
This was the original context.
There is an incredible irony here as you've demonstrated exactly how AI works with the example of your image and *most likely* how you made it.
You opened Daz Studio and typed Centaur in your search window (There's your prompt)
----
I was talking about an exact image posted in this thread. If you and others want to talk about making items using Blender, Max, Unreal, etc - painting, drawing, sculpting, go ahead.
None of that is relevant to what I said. I spoke about an exact picture and an exact method.
And what was the point? Before being scolded for dangerous leanings, it was about judging (the value of) art based on HOW IT WAS MADE.
OrangeFalcon said on the previous page "But I will always prefer art that comes from the creative mind, not from just typing prompts."
-------------
I never said using Daz was exactly or even close to prompting with a generative AI engine.
What I said, again is the AI (I repeat THE AI) THE AI is performing a similar function by grabbing data it was trained on- to build its images/output.
The same way a Daz user, a Daz Studio user (not the store shopper) pulls items from their library of assets to make a render.
not touching the ethics argument
but
if Matt_Castle was so inclined (I doubt he is), he could train a LoRA on his renders (for his own use and nobody else's; see the DAZ EULA) and as a result get images of centaurs wearing bottoms quite consistently
we all could draw, paint, etc. dressed centaurs too and train a LoRA on our art, but mine would probably look like stick figures
data in data out
I trained a LoRA on my selfies from my twenties and get fairly consistent images that look like photos of me at that age cosplaying almost anything I prompt for
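"Data in, data out" can be made concrete: a LoRA doesn't replace the base model, it adds a small low-rank update, W' = W + (alpha/r)·B·A, where only the tiny matrices B and A are trained on your images and the frozen base weights W stay untouched. A pure-Python toy sketch of that update (2x2 weights and a rank-1 adapter, numbers chosen only for illustration):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, A, B, alpha=1.0):
    """W' = W + (alpha / r) * B @ A -- the standard LoRA weight update.
    r is the rank (the small inner dimension shared by B and A).
    Only A and B come from training on your data; W stays frozen."""
    r = len(A)                       # rank = number of rows of A
    delta = matmul(B, A)             # same shape as W
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

# Toy frozen weight (identity) plus a rank-1 adapter:
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]      # r x in  (1 x 2), learned from "your data"
B = [[3.0],
     [4.0]]           # out x r (2 x 1), learned from "your data"
W2 = apply_lora(W, A, B, alpha=1.0)   # W + B@A = [[4, 6], [4, 9]]
```

Because the adapter is so small relative to the base model, it can be trained on a personal dataset (like a folder of selfies) in hours, which is why the stick-figures point holds: the adapter can only reproduce what its training data contained.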
While it is theoretically true that someone can create whatever they want with 3D, or more specifically for this conversation/forum, with DAZ Studio, reality falls well short of theory. Unless someone takes the time (and has the skills/talent/ability) to become proficient with 3D modeling, texturing, and rigging, their imagination is limited to the products they can find in the store(s). I would venture to guess that the vast majority of DS users don't even have the knowledge/skills to import models from other formats and set up the textures properly for DS/Iray. This makes exploring new ideas beyond what is available for DS functionally impossible.
While the centaur examples done with DS are quite impressive, they also show a great deal of skill that I would argue the average DAZ user does not have. To get more guided results from AI, you will need specific LoRAs, and possibly custom checkpoints, to help guide the AI to the result you want. Still, it's largely true that if you want the image to look exactly like the image in your mind's eye, it can be quite difficult. But this is also true with DS. Most users are limited to the content they can purchase, possibly modify a bit with different materials (usually bought as well), and maybe a bit of kitbashing. So we typically end up settling for something that is "close enough", just like with AI image generation.
In general, the learning curve for AI image generation is much, much gentler than for DS/3D. The cost of AI image generation is also much lower than with DS; in fact, it can be done for just the cost of electricity if the user has a decent computer. If the user's intent is just to make a pretty picture, then AI image generation is no doubt a very attractive option.
Something simple...
I wasn't talking about exact pictures, and I specifically allowed for some variation: "Stylistic differences would be fine and I'm not worried about the exact details of what's in the background (although it should retain the same feel), but other than that, if I wanted something close?"
The question wasn't "can AI replicate this exactly?". The question was "how close can AI get?"
But even breaking it down to the loosest version of the concept - parodying a 'car girl' pin-up by replacing the typical skinny young white woman in somewhat skimpy clothes with a somewhat chunky black centaur in similarly skimpy clothes - that is something that is like pulling teeth to get an AI model to do. Hell, even the basic concept of a regular 'car girl' pin-up largely escapes some models, despite being a well established genre. (I gave it a shot with a couple different models I have on my computer, but had to stack a heap of terms and a Lora to even have some generations that were on the car rather than inside the car).
I'll concede that DS isn't the most efficient medium to do this kind of scene, but it's obviously possible. And if I went to a more traditional commission artist and said "Draw me a centaur doing a pin-up pose on the hood of a car. She's wearing combat boots on all four hooves and two sets of short shorts - one around her horse shoulders much like they were still a human pelvis and the other around her hindquarters", then while I might not get this exactly, I think most competent illustrators would know roughly what I meant without further elaboration.
I am, however, really not sure that such a generation is possible with AI as it currently stands; despite efforts on my part, I've not seen anyone manage it. Even trying to work with img2img or ControlNet from existing renders, the AI doesn't understand the inputs and actively wants to move *away* from what I'm feeding it.
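The "moves away from the input" behavior has a mechanical explanation: img2img noises the source image and then runs only the tail of the denoising schedule, with the denoising strength controlling both how much noise is added and how many steps the model gets to reshape the result. A simplified sketch of those two relationships (a conceptual illustration of the common scheme, not any specific library's exact scheduler):

```python
import math

def img2img_start(num_steps, strength):
    """img2img skips the early denoising steps: only the last
    `strength` fraction of the schedule runs. strength near 1.0
    gives the model almost full freedom to ignore the input."""
    run_steps = min(int(num_steps * strength), num_steps)
    start_step = num_steps - run_steps
    return start_step, run_steps

def noised_latent(x0, eps, alpha_bar):
    """Forward-diffuse a (scalar) latent:
    sqrt(alpha_bar)*x0 + sqrt(1 - alpha_bar)*eps.
    Lower alpha_bar (used at higher strength) keeps less of the
    original image's structure before denoising begins."""
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps
```

At strength 0.3 over 30 steps, 21 steps are skipped and only 9 run, so most of the render's structure survives; at strength 0.9, 27 steps run and the model is largely free to drift from it. The catch for unusual subjects like a clothed centaur is that low strength preserves structure the model was never trained to denoise toward, while high strength lets it snap back to the concepts it does know.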
I first learned about AI in March 2023, after launching my 3D art board that February, and completely shunned it. I spent the rest of the year defending my decision to ban it while learning more about the tech. The more I learned, the more my position began to soften. Then, I held a vote in October 2023 on whether to allow AI. Ten people voted and the results were split down the middle, leaving me to break the tie. I chose to keep AI banned, and found myself at odds with my decision for the next five months leading into last March, despite knowing, on an ethical level, that I made the right call.
There's an incredible video discussing the scope of the impact AI has had on the art industry. The host shares my position on AI, in that we're both fascinated and terrified by it. I especially liked his closing remarks:
A year ago he had a friendly debate with an Adobe Solutions Architect about the pros and cons of AI. Despite his own reservations, he mentions toward the end of the video having developed his own hybrid workflow using his existing knowledge as a graphic designer and a combination of AI tools like Midjourney, Photoshop Generative AI Fill, and a couple other tools.
He's not alone, either. Several weeks ago I started following a YouTuber who has a graphic design background and now does ComfyUI tutorials. Six months ago he shared his history as an artist, including his journey from traditional design to AI. It's an absolutely fascinating look into how one artist embraced AI and adapted it into his craft.
As hobbyists, there are artists like me who've been creating 3D art for over a decade. Some have embraced AI as a powerful new tool while others have shunned it. Someone on another board I frequent once reminded everyone that we create 3D art for fun, and now they're experimenting with AI to explore its limits. Someone else told me the ethical debate is a separate issue for them. Others have expressed the same point of view. I've learned how it works, and that there are valid arguments on both sides. People want to play with this shiny new tech and have fun with it. Whether enhancing their original work, or generating new images solely from prompts, they want to share and discuss their creations. I had to adapt if I wanted my board to stay relevant and be successful. Therefore, I began allowing AI last March, and have since embraced it myself as a tool that's proven incredibly useful for postworking my 3D renders, and generating images from scratch that would've been impossible with Daz.
In retrospect, what companies like Stability AI should've done was develop code that produces the same kind of results without needing to use existing publicly accessible works to train data with. That would've been truly impressive, and I believe would've mitigated the ethical debate. In any event, the cat's out, AI isn't going anywhere, and whether you're on board or not seems inconsequential in the grand scheme of things.
All of that being said, last year I had to make a few design changes with regard to how my board looks. One of those changes included adding a background image to the forum index. Since our primary medium is 3D art, I wanted to create something with Daz, and made a genuine effort to set up a forest render. Sadly, I couldn't produce anything satisfactory with the content I had, and I couldn't find anything in the Daz store that would've helped me accomplish my goal. So, I turned to AI. To my incredible fortune, Flux produced exactly the kind of image I had in mind on the very first generation of my text prompt. It's been my board's background ever since.
should have used Carrara or Bryce
I say that as someone who uses Ai a lot too
I could create a picture like that in Carrara fairly easily
It's Valentine's Day
No he is not.
yes it is! happy V-day in AI
I refuse to use AI; it's synthetic, artificial, fake, contrived stuff, and it's not clear where it gets the resources to create what it does. But I admit that my work has dropped off a lot since this stuff was put on the market.
The same thing could be said about any medium that uses computers.
+1
The same thing HAS been said about EVERY medium that uses computers.
And here's more irony- it's been said about people who use Daz Studio (and in legacy terms, people who use[d] Poser).
Yeah, I prioritize mediums that don't use computers to contact the dead too...
Adding to that, there are artists here on Daz and other communities who can't even bring themselves to call themselves artists because they use Daz/Poser. They think the definition of 'artist' only applies to traditional artists.
Well, that's why we use Modifiers (or is it prefixes) or is that a descriptor?
Like Electronic Musician...
Digital Artist....
Ain't the program even called Digital Painter?
3D Artist...Graphic Artist....etc.
There are different kinds of artists. Daz most represents the style of a director, where you're the one incorporating all the elements that make the aptly named scene work. James Gunn is an incredible director and puts things together quite well, and without his style and vision, we wouldn't have the great movies and shows he has created. Painters aren't creating all the oils that are used for their individual works; they're putting them together themselves.
That's much like what we do here. Maybe we aren't creating the assets but we're utilizing them to a specific vision, and that human eye and element is important.
And with that said, the main difference between Daz and AI is that with Daz we are selecting every element that makes the scene work the way we like. AI doesn't work like that: you can easily make a prompt without specifications, and random sources are pulled to generate it. Is that being an artist? I don't really think so, because one can use it without making the specific creative choices an artist makes in order to create something. You can make even a generic, short prompt of fewer than 10 words and keep rerolling it until you like what it's doing.
No doubt sometime in the future they will be doing that.
Again, please discuss the topic in a civil manner. If this continues to feature comments directed at posters, not the topic, it may have to be locked.
I recently had eye surgery and it's been really hard to do much with DAZ Studio. The small fonts on the interface make it really hard to read with both my eyes, but with only one eye it's even more difficult to read on my 17 in. laptop screen. So I messed around with Krita and the Krita AI plugin for a while to see if I could get anything I really liked.
I'm pretty much a total newb with AI image generation, but I'm happy with my first try at making something decent. At least for me, it wasn't as easy as people seem to think it is. It was an iterative process of starting off with a prompt, generating 4 images, then refining/modifying the prompt to guide the AI in the direction I wanted to go. So it was almost 2 hours of generate, edit prompt, generate, edit prompt (you get the picture). I also used different checkpoints along the way to see which one would give me the results I wanted. The base image I settled on (the second image uploaded here) was generated using the Digital Drawing XL model (checkpoint). It gave me the rich colors I was looking for and a more stylistic character, but I wanted it to look more realistic. I used the JuggernautXL model to refine the image (used twice: once on the original, then on the refined image). I then saved the image, brought it back into Krita, and used the AI plugin to enlarge the image 2X (the original was 1080p).
The final step was to take the image into Gimp, clean up some distortions (mainly the ear around the earring), and do a bit of artistic enhancement (unsharp mask, vignette, lighting enhancement, etc.). With everything, the image took about 3.5 hours to create. I really wish I could have re-framed the image a bit better. Something that is so easy to do in DS seems difficult with AI, but it could be I don't know what I'm doing, or maybe future updates will make it easier.
One thing I've noticed is that there is almost always something that "needs help," where the image generator seems to be a bit confused. But with a bit of work in Gimp (or Krita, Photoshop, etc.) it can be fixed. If you want a good, error-free image, I think you will almost always have to do some manual editing. It also seems that a bit of postwork in your image editor is needed to get the most out of the image.
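The workflow described above is essentially a pipeline: base generation with a stylized checkpoint, two img2img refinement passes with a realism checkpoint, a 2x AI upscale, then manual cleanup. A stub sketch of that staging (model calls replaced with placeholders; checkpoint names and the 1080p starting size are taken from the post above, everything else is illustrative):

```python
def generate(prompt, checkpoint, size=(1920, 1080)):
    """Stub for the base text-to-image pass (e.g. a stylized checkpoint)."""
    return {"size": size, "history": [f"generate:{checkpoint}"]}

def refine(image, checkpoint):
    """Stub for an img2img refinement pass with a different checkpoint."""
    image["history"].append(f"refine:{checkpoint}")
    return image

def upscale(image, factor=2):
    """Stub for an AI upscaler: multiplies both dimensions."""
    w, h = image["size"]
    image["size"] = (w * factor, h * factor)
    image["history"].append(f"upscale:x{factor}")
    return image

img = generate("portrait, rich colors", "DigitalDrawingXL")
img = refine(img, "JuggernautXL")   # realism pass on the original
img = refine(img, "JuggernautXL")   # second pass on the refined image
img = upscale(img, 2)               # 2x enlarge before manual cleanup
```

Keeping the stages separate like this is what makes the "generate, edit prompt, generate" loop manageable: each pass can be rerun or swapped for a different checkpoint without redoing the whole chain.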
Final Image
Original image generated by the Digital Art XL checkpoint.
I am getting better at poking the AI in a direction I want and likewise the AI is getting better at being able to go in a direction the customer wants. It's like the Jetsons. Maybe one day, I'll have a Rosie the Robot Maid and instant trays of healthy food that tastes good.
another video using my young wendyluvscatz (Wendy25) LoRA
I only prompted "Wendy25, a woman wearing Gunne Sax dresses, walking through the garden" and it dressed mid-twenties me in exactly the dresses I wanted
a bit more work generating videos, interpolating, upscaling, having Riffusion compose and perform a song for me, lipsyncing in FaceFusion, editing in Blender video editor inbetween, as I do
but yeah, easier than using DAZ
I still bought almost $30 of stuff from DAZ today too
real photos for comparison
Wendy, that is really impressive!
I've got to admit that I got somewhat mesmerized by watching the hands and feet. The odd deformations create sort of a hypnotic effect while also being a bit disconcerting.
Still, it's pretty amazing to recreate a 25 year old version of yourself!