Stable Diffusion with ControlNet has a menu for loading rigged FBX, JSON, and similar models. But my custom toon Marx brothers do not appear to transfer correctly from Carrara FBX to Stable Diffusion.
Have you tried transferring your Carrara FBX to Blender
and then exporting from Blender as FBX, or another format that ControlNet accepts?
I go via Blender when I need to export FBX models from Shap-E to Daz Studio, for example.
Awesome work, and thanks for sharing your thoughts about your experiments with Stable Diffusion.
I am also doing my own experiments with AI for the same reasons.
A video using my trained Textual Inversion Embedding
A recent pic of me for comparison.
I added my embedding for you to try. The words that trigger it are "myself" AND "wendyvain",
at the beginning and end of the prompt.
It will sometimes work with just "wendyvain" in the prompt, but for a specific use, something like "myself, a woman doing such and such or wearing such and such etc., wendyvain" should trigger it.
It was trained on DreamShaper 7 specifically, but I think any SD v1.5 model should work.
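The trigger placement described above (one token at the start of the prompt, one at the end) can be sketched as a tiny helper. This is just an illustration of the described usage; the function name and the scene text are mine, not part of the actual workflow:

```python
def build_prompt(scene: str, front: str = "myself", back: str = "wendyvain") -> str:
    """Wrap a scene description with the embedding's trigger words,
    one at the beginning of the prompt and one at the end."""
    return f"{front}, {scene}, {back}"

print(build_prompt("a woman reading a book in a garden"))
# myself, a woman reading a book in a garden, wendyvain
```

As I understand it, in A1111 the embedding file just needs to sit in the `embeddings` folder; typing its trigger word anywhere in the prompt activates it.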
Nice work, Wendy.
I have not tried to make my own Textual Inversion Embedding, yet.
That is awe-inspiring! Well done. I just did my first masking in image2image, and my first upscale. So I am still just taking my first steps. Unfortunately, I continue to get conflicts among extensions related to animation. The most recent conflict was with Deforum.
Thanks for the suggestion. I tried that with a similar model some time ago but have forgotten the details. I think I had to look up FBX versions on the web and use a converter, but I don't remember for sure. Given how (intentionally) crude my toon figures are, I would probably be better off just transferring the mesh to Blender and rigging it there. I have successfully done something similar with Mixamo.
Well, I have yet to try FBX with ControlNet. Deforum is fussier about ControlNet too; a few models that work fine with image2image won't work for me in Deforum.
My build being in a VOC container is different from before, when I used A1111 standalone, so again I am not much help.
I have serious conflicts due to my hardware setup with my iGPU.
Hope to try it at some point. It seems unavoidable...
more or less my reasoning
I don't do commercial work; 3D is just a hobby for me, so I feel less guilty.
I understand the artists' point of view, but I am not on one side or the other, and I resent being called a thief because I am exploring and educating myself on new technology.
The minute I deprive someone of income by my actions, I will accept the thief label.
I haven't.
It's not black and white for me; it's a spectrum, and my feelings are in the middle, or grey, area.
I am sad that DAZ PAs I have bought products from, even recently (within the last week), are so angry about it.
For them it seems a black and white issue, but they created their models. I am just a user/customer, and IMO not an artist either (me personally, that is); I am a hobbyist who uses premade models by others.
I don't steal 3D models, and I don't upload other people's art.
I have Greg Rutkowski's Witcher art in my desktop rotation (it came with the Witcher games I bought, and he was paid by them).
I have looked at it and my DAZ stuff and considered doing a render based on it. Shoot me!
Fooocus has now decided to stop working too
A video of my AI likeness
The details in the video are amazing, Wendy.
I have found tutorials for making animations in ComfyUI.
Just need to find some time to make it.
thx
Nodes terrify me, so there is nothing comfy about that UI.
this was fun to watch and listen to, Wendy. Well-done!
thx Cris
Trying to figure out how to recreate in Daz Studio something like this...
... and of course The Pigs
Could not forget about The Cats...
DAZ Filament Render reimagined with Stable Diffusion
The music is also AI-created, using Sumo Chirp.
you can see the 2 Filament renders used as shorts on my Madcatlady channel if curious
https://youtube.com/shorts/V5uZb9URi0g?si=H1R3YVWXcq8SldAC
https://youtube.com/shorts/GxhYCojnLhQ?si=5xxDyUfEdzkPCfXA
Great work, Artini. Daz Studio can probably render anything, but needs Blender or ZBrush or.... to do the sculpting or modeling. I did some more playing around. Here is yet another interpretation of the myth of George vs the Dragon, perhaps suitable for cover art. I started with decades-old Poser figures in Carrara and rendered a depth pass. Poser 7 Sydney and Simon. Maybe 10 - 15 minutes to throw together the Carrara scene, including customizing with the built-in terrain modeler and plant modeler. I used the depth pass in ControlNet to help direct the way Stable Diffusion manipulated the image. I will attach the original Carrara render and its Carrara depth pass in addition to the Stable Diffusion output. Pretty sure Daz Studio could do the same as Carrara's depth pass. Canvases, I think, is the term.
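A depth pass from Carrara (or Daz Studio) is usually a raw z-buffer, while ControlNet's depth model expects an 8-bit map where closer pixels are brighter (the convention of the MiDaS maps it was trained on). A minimal sketch of that conversion, assuming the pass comes in as an array of z-values; the function name and defaults are my own illustration:

```python
import numpy as np

def depth_pass_to_controlnet(z, near=None, far=None):
    """Normalize a raw z-depth buffer to an 8-bit depth map where
    closer pixels are brighter (what the depth ControlNet expects)."""
    z = np.asarray(z, dtype=np.float64)
    near = z.min() if near is None else near  # default: use the buffer's own range
    far = z.max() if far is None else far
    t = np.clip((z - near) / max(far - near, 1e-9), 0.0, 1.0)  # 0 = near, 1 = far
    return ((1.0 - t) * 255.0).round().astype(np.uint8)  # invert: near -> bright
```

Saving the result as a grayscale PNG should give an image the depth ControlNet unit can take directly.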
Another great creation, Diomede.
Looks like I need to learn much more...
A Stonemason ruin in Twinmotion, Zombie Diffused
I am in a similar boat.
Thanks for the kind words. You are lightyears ahead of me, making the kind words even more kind.
Custom Figures for Comic Strip
Years ago, I modeled and rigged some custom figures intending to make a comic strip or animation. Never finished the Brash Lonergan and Moxie Espinoza comic strip, but I still trot out the figures now and again. It is amazing what the AI programs can do. Here is an example. I posed my figures in Carrara, along with simple primitives, and rendered out a base render and a depth pass. In Stable Diffusion's Controlnet extension, I used the base render for the 'canny' function and the depth pass for the 'depth' function, setting each to half strength. I then entered some simple verbal prompts describing the scene. Here is my base render and the result. Going for the look of a 1940s newspaper comic strip, so this has too much modern anime influence despite my putting anime among the negative prompts, but still amazing to me. Find the base render and the depth pass attached.
I prompted for the comic style of the Flash Gordon creator. I would have to touch this up. For example, you can see from my base render (attached) that the colors of the uniform are reversed. Here is the verbal prompt for the man in the foreground: (1 boy) man blonde hair in foreground wearing uniform (red shirt, gold trim, blue pants) standing, pressing right hand against pillar, holding box of tools with left hand.
Result from AI
Using a different model makes a huge difference. Here is the result of using the same verbal prompts and the same input images but using a different model. The result for the figures is much closer to the 1940s style, but the sets and props are not as good. And where did that moon come from?
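The parentheses in the prompt above are the A1111 web UI's attention syntax: (text) multiplies that chunk's weight by 1.1, and (text:1.4) sets the weight explicitly. A minimal sketch of decoding a single chunk (the helper name is mine, and the real parser also handles nesting and square brackets for de-emphasis):

```python
import re

def chunk_weight(chunk: str) -> tuple:
    """Decode one prompt chunk using A1111-style attention syntax:
    (text:w) sets weight w, (text) boosts by 1.1x, plain text stays at 1.0."""
    m = re.fullmatch(r"\((.+):([0-9.]+)\)", chunk)
    if m:
        return m.group(1), float(m.group(2))
    m = re.fullmatch(r"\((.+)\)", chunk)
    if m:
        return m.group(1), 1.1
    return chunk, 1.0

print(chunk_weight("(1 boy)"))  # ('1 boy', 1.1)
```

This is also why putting "anime" among the negative prompts only nudges the model; giving it an explicit weight like (anime:1.4) in the negatives pushes harder.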
Nice experiments with the depth pass.
Yes, different models, seed values, and LoRAs give different results.
Just trying to figure out the best workflow for me.
Making comics is another idea to try...
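The point above about seed values can be seen with any pseudo-random generator: the same seed replays the same sequence, which is why re-running a generation with a fixed seed and unchanged settings reproduces the image, while a new seed starts from different noise. A sketch using Python's stdlib random as a stand-in for the sampler's initial noise (the function is illustrative only):

```python
import random

def sample_noise(seed: int, n: int = 4) -> list:
    """Stand-in for the initial latent noise a sampler derives from a seed."""
    rng = random.Random(seed)  # independent generator, deterministic per seed
    return [round(rng.random(), 3) for _ in range(n)]

print(sample_noise(1234) == sample_noise(1234))  # True: same seed, same noise
```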
Two Lions...
Meerkats...
Wow! Very cool, @Artini
Thanks, Diomede.
Wish I could create similar images from scratch using Daz Studio.
There is https://www.daz3d.com/space-bash available,
so here is an image to inspire some possible creations with it, I think.
The other one would be cool to recreate in Daz Studio
Just one more...
@Artini. All three are very cool.