Comments
And yet another conversion from SD.
With fingers appearing in the images, I could not get very good results, and it also took a significant number of attempts to get something nice. My conclusion is that one needs to practice a lot, even if it is just a text-to-image prompt.
I like what you're doing, using your rendered poses in ControlNet. Sadly, I don't have the VRAM to use ControlNet, so I've had to get creative with IMG2IMG as in my above posts, basically producing a hybrid of my original render and the AI's interpretation of it, followed by some post-processing. My goal is to break further past the barrier between 3D rendering and photographic realism. Great work with ControlNet!
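For anyone curious what that kind of IMG2IMG pass can look like in code, here is a minimal sketch using the Hugging Face diffusers library; the model ID, file names, and strength value are my own illustrative assumptions, not RenderPretender's actual settings. Lower strength keeps more of the original Daz render; higher strength hands more over to the AI's interpretation.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an SD 1.5 checkpoint in half precision (easier to fit on modest VRAM)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The Daz render is the starting point for the hybrid image
init_image = Image.open("daz_render.png").convert("RGB").resize((768, 768))

result = pipe(
    prompt="photorealistic portrait, natural skin texture, soft studio lighting",
    image=init_image,
    strength=0.45,            # how far the AI may drift from the render
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]
result.save("hybrid_render.png")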
Thanks a lot, @RenderPretender
My graphics card has 8 GB of VRAM, so I also need to experiment with different settings to get SD to work.
Your images are great, and I like the postprocessing on them.
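Since 8 GB cards keep coming up in this thread, these are the memory-saving switches I understand diffusers provides; which ones you actually need depends on resolution and checkpoint, so treat this as a sketch (pipe being any loaded pipeline, as in the earlier example).

# Trade a little speed for a lot of VRAM headroom
pipe.enable_attention_slicing()      # compute attention in smaller chunks
pipe.enable_vae_slicing()            # decode the image in slices
pipe.enable_model_cpu_offload()      # park idle submodules in system RAM (needs the accelerate package)

Rendering at 512x512 and upscaling afterwards also keeps memory use down.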
@Cris Palomino
Actually I am using the 360 skyboxes more as fake world environments, much like a game engine does.
I am mostly doing NPR/toon styles these days, so true HDR lighting is not really needed.
Thank you. I'm working on another set of images right now, with the goal of honing a technique for getting a character's face to be reliably and consistently recognizable across a series of images.
Any 360 panoramic image works as IBL in Carrara, Octane Render, Poser, and presumably 3Delight's UberEnvironment; it's just Iray and Filament that seem to have issues with them. Probably an unbiased-renderer thing.
I definitely get shadows and coloured lighting from the Blockade Labs skydomes in Octane (well, I did; I am without that PC right now).
The AI is the music and singing (SongR AI and Meta's MusicGen); the video itself was rendered entirely in Carrara (OK, a bit of Particle Illusion postwork was added in the Blender video editor).
Are those your lyrics for the potion shoppe? A nice little descriptive story in there.
Music-wise, I liked the first version better; the second one had a few too many dissonant chords.
No, SongR AI wrote it from the prompt "magic potion shop".
Attached are a few more trial images that I have put through my experimental DAZ-through-SD workflow. One of the most frustrating aspects of AI/Stable Diffusion for me, even when using my DAZ renders as input/reference images, has been trying to produce consistently recognizable facial characteristics across image series. I used the technique of prompt scheduling for this set of four images, and I'm quite pleased with the result. Even to a very discerning viewer, I think there is no question that these images are of the same woman.
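For anyone who has not tried prompt scheduling: in the AUTOMATIC1111 WebUI it is the [from:to:when] prompt-editing syntax, which swaps part of the prompt partway through sampling. A placeholder example (not the prompt used for the images above):

portrait of a woman, [smiling:serious expression:0.6], red hair, green eyes, freckles

The first 60% of the steps run with "smiling" and the remainder with "serious expression", so the composition and overall face stay locked in while the details shift. Keeping the seed and the rest of the prompt fixed also helps the faces stay recognizable across a series.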
https://youtu.be/iAhqMzgiHVw
Here is a cool video that talks about generating the same character in SD with the help of ControlNet and some manual fixes.
The video itself is actually about preparing a character sheet or data set for LoRA training, but I think we can use several tricks from it to generate the same character in a bunch of poses (which we can reuse or copy-paste into other scenes).
You can start with a DAZ preview or Filament render as the ControlNet source or IMG2IMG guide.
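To make that last point concrete, here is a minimal diffusers-based sketch of the render-as-ControlNet-source idea; the model IDs, file names, and prompt are my own assumptions, and the workflow in the video may differ in its details.

import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from controlnet_aux import OpenposeDetector

# Extract an OpenPose skeleton from the DAZ preview / Filament render
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = detector(Image.open("daz_filament_render.png").convert("RGB"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="full body shot of the same character, consistent face, detailed clothing",
    image=pose,               # the pose skeleton guides the composition
    num_inference_steps=30,
).images[0]
out.save("character_sheet_pose_01.png")

Fixing the seed and only swapping the pose image is one way to nudge the results toward the same character across poses.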
SongR +ScreamingBee
Loved that album! And the cover.
For the LenZmen video "Heartbeat" I ran Daz Studio Renders through Kaiber to get the morphing reels and also turned the characters into rappers using AI. Here are some of the images I used.
There are also some animated segments, rendered with the Daz Studio Filament engine. I stitched everything together using Adobe Premiere.
juvesatriani,
Thanks for sharing that video! My first impressions?
There are a number of software tools to master here, whose outputs cascade into one another. For someone new to generative AI art, there's a good amount of technical learning involved. There's also a lot of room to make mistakes, but just the same there's room to develop one's own style and adapt the process to achieve one's unique goals.
Also, even with generative AI's assistance, the techniques here still require effort and judgement on the part of the user. This isn't just the manual steps of cleaning the intermediate images or organizing them into sheets; more importantly, it takes a trained eye to separate the good images the AI outputs from the bad. Clearly it takes a fair amount of patience to put together a good training set. As with anything to do with software, garbage in equals garbage out. The better the garbage man -- in this case, the better the artistic vision and the higher the artistic standards -- the better the overall output will be, so to speak.
Again, thanks for sharing the video.
3Diva, the video is worth looking into.
Cheers!
Regarding the video training for creating custom characters. Didn't Daz recently change its licensing for its content? Doesn't the new wording exclude incorporating Daz content in the training of AI? Or maybe I either misread that, or my memory is incorrect. I am posting this as a 'I am not sure,' not as a 'you can, or cannot, do something.' With that in mind, there may be, just may be, an issue with using Daz renders as part of the training for custom characters.
With that 'I am not sure' out of the way, thanks for posting the video. Looks like a great resource with or without input from Daz stuff. Could be combined with Daz stuff in post-processing in any case. I will need to watch it several times to get the steps.
Certainly not as a LoRA model for redistribution; for your own use it would likely be treated the same as 2D renders.
Nonetheless, I myself tend to use other AI images, selfie video, and other real-life footage/photos for OpenPose, as otherwise why wouldn't I just be doing a render?
I, and I dare say most DAZ users, want the opposite: OpenPose to 3D pose, like Plask mocap does.
a few of my video renders were used as OpenPose animations in ControlNet for this
although I cannot say it's at all coherent
I just love the clothes and wish I had this Victorian Gothic fantasy stuff for my DAZ people
(yes I have just about everything of that genre from the store already but these look nicer)
Neat! I hear what you're saying; some of the clothing (and in particular some sci-fi uniforms) is great inspiration for people modelling clothing.
-- Walt Sterdan
Thank you for the suggestion! The video is really cool. I don't think my PC can handle model training, though. I'm sure they'll soon make model training a lot more user-friendly for lower-end computers. :) At least I hope so!
Here are before and after images I sent through Stable Diffusion. I think it turned out kinda cool, but replicating the results has proved elusive.
I am looking through these renders mixed with AI and they are pretty cool.
But I have to admit I am overwhelmed with which AI engine to use, and was wondering if someone could help point me in the right direction.
I'm trying Dream Studio and the results are kind of sad, probably because I am lost with how to prompt and whatnot.
Can someone recommend a good AI engine to start with, and some sort of tutorial on how you can get good results from an uploaded render?
Thanks 8-)
This is the image that I am playing with .... (EDIT: Or maybe not, it's not uploading LOL)
OK let's see if this link works:
https://www.daz3d.com/gallery/user/6147139440738304?edit=albums#image=1301334
There is an AI based editing program at HumbleBundle.
Maybe that one will be easier to use.
Stable Diffusion and Midjourney may disappear if AI regulations are implemented as proposed by Adobe (the C2PA content-provenance metadata system).
Re-sharing info posted by wolf359 about his NPR/Anime project with AI elements. The links are here and here. Worth looking into.
Cheers!
From this Daz 3D render of https://www.daz3d.com/oso-cat-for-genesis-9
To these AI-edited images - 01
No. 02
No. 03
I am so glad it didn't misinterpret your pokethrough as a happy boy cat
I had increased scaling on the shorts to 300%, I think, and still got pokethrough in Daz Studio.
The AI treatment was much easier to handle, and it still took some guidance from the DS render.
It is also possible to get digitigrade feet in AI, if one desires.