Import animation from blender into daz
ps2000
I'm currently working out a workflow where you export a BVH of a single frame of the reset pose and the char as an OBJ, bind them together in Blender and do your animation, and then export the BVH animation back to DS. It requires a plugin, albeit a simple one. You might want to watch this thread where we were discussing how it could work. I think it'll be a powerful technique to use Blender's IK, graph editor, etc... but it'll take some working out.
Got it! I'm using the Daz To Blender bridge, but for this you will need the Better FBX Exporter addon for Blender. The principle is similar, but easier. If it's still needed, I can describe the process.
I'd say the most trustworthy source to answer that question is here:
https://diffeomorphic.blogspot.com/2021/04/save-pose-preset.html
It describes a new workflow with probably less data loss than the previous one:
http://diffeomorphic.blogspot.com/2021/04/importing-animation-back-into-daz-studio.html
Weirdly enough there are tons of tutorials around about bringing DAZ animation to Blender... which doesn't make any sense (to me) since Blender's animation potential is immensely powerful. Probably in the top 2 with Maya.
So, what happened with this? I can't get the diffeo save pose preset to work...
Works fine here.
https://bitbucket.org/Diffeomorphic/import_daz/issues/1138/problems-with-save-pose-preset
edit. Just realized it is always you. Please avoid spreading the same issue around multiple times in different places.
1. Add custom shapes, add Simple IK, and animate.
2. Select all deform bones.
3. Switch to Object Mode.
4. Object > Animation > Bake Action, with "keyframe all bones" and visual keying enabled.
5. Save pose preset, changing the frame range from the default single pose to the animation range.
6. Load the pose preset in Daz Studio.
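The baking step above can be scripted if you're doing it often. A minimal sketch for Blender's scripting tab, assuming a selected armature object named "Armature" (a hypothetical name; adjust it and the frame range to your scene):

```python
import bpy

arm = bpy.data.objects["Armature"]  # hypothetical object name
bpy.context.view_layer.objects.active = arm

# Select all deform bones in Pose Mode so they all get keyed.
bpy.ops.object.mode_set(mode='POSE')
for pbone in arm.pose.bones:
    pbone.bone.select = pbone.bone.use_deform

# Object > Animation > Bake Action, with visual keying enabled so the
# IK/constraint results are baked down into plain FK keyframes.
bpy.ops.nla.bake(
    frame_start=1,
    frame_end=100,
    only_selected=True,
    visual_keying=True,
    clear_constraints=False,
    bake_types={'POSE'},
)
bpy.ops.object.mode_set(mode='OBJECT')
```

Visual keying is the important part: it keys what the bones actually look like after constraints, which is what Daz Studio needs since the IK setup doesn't survive the round trip.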
I got it to work but there's jitter in the feet back in Daz and I can't stabilise it. I think it's time to jump to Blender fully now to be honest.
We solved the issue. First, I used the Simple Rig because it keeps the bone names the same. Save the animation after baking the keys. Next, before importing the animation, select the figure and unlock all the properties on the figure. This is needed if there are bone translations in the anim. After doing that the animation looks almost the same in Blender and Daz (actually the same).
Nice! Did you just unlock the rotation, location, and scale in the object properties of the character as a whole, or did you go to every bone and have to unlock each individual bone property individually?
https://bitbucket.org/Diffeomorphic/import_daz/issues/1138/problems-with-save-pose-preset
So there is still a problem with the feet not zeroed to the floor in DAZ?
It's hard to extract a proper workflow from the thread because the information is scattered all over the place.
Could somebody please detail their working workflow to get the animation back properly into DAZ?
@ps2000
If you want to import poses and animations from daz studio then go for the simple ik rig because it's the only one with jcms working fine. This will work 100% fine.
If you want to export poses and animation to daz studio there's no rig working fine atm. The best is the simple ik rig but it has jittering on the feet.
If you want to animate in blender then go for the simple ik rig because it's the only one with jcms working fine. This will also work 100% fine.
Then what are MHX/Rigify for? They're good for animating in Blender, but the JCMs will not work fine. That means you will have to fix some deformations using the tweak bones. You may also not import JCMs at all and use only the tweak bones to fix the deformations.
Thanks.
I want to import daz characters to Blender, animate them and then export the animation back to DAZ studio for rendering.
So feet jittering is still an issue?
From reading the thread on bitbucket and here I got the impression that it was solved and only the issue with the feet not zeroed to the floor remained.
It's a shame that DAZ hasn't solved the issue yet. I mean the issue is several years old and they update their DAZ/Blender bridge regularly but seem to neglect the reimport function.
I am looking into Cascadeur now.
Maybe I don't have that reimport issue there.
@ps2000 Yes for feet jittering I mean the "not zeroed". The issue with simple ik is that ik will "zero" the feet then when you bring the animation back to daz studio fk will "unzero" the feet. Of course using retargeting tools is possible with mhx and rigify that don't get the ik jittering, so you can animate with rigify and retarget to the fk daz rig, then export back the daz rig to daz studio.
I take it there is no way to animate in blender with MHX, and then export the resulting animation back to DazStudio?
And so that the feet are precisely pinned. What's the point of animating in Blender with Simple IK if it works as badly as Daz IK?
Maybe this can be done with Auto Rig Pro and its retargeting tool? But ARP is not supported by Diffeomorphic. So would I have to manually create a rig for each character in ARP?
I agree the issue should be addressed, unfortunately Thomas didn't find a general solution yet. My own suggestion was to fix the ik target position that's easy to do, but this is incompatible with snap ik/fk. So it only works if you don't use fk for posing.
Some fixes were done on mhx though, so you may try it and see if it works fine enough.
The ARP retargeting tool has presets to send an animated character to the major game engines
but not Daz studio.
I believe @TheMysteryIsThePoint
is close to a Blender to Daz studio animation solution IIRC
Yes, I've got a few issues to resolve having to do with mocap with armature bones that have different orientations, but I'm close. And I still have to port the import app to Win32, but I'm just going to make a quick and dirty command line app. I just have to find the time to do it.
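For anyone wondering what the "different orientations" problem looks like in practice: the same world-space rotation has different local coordinates under each bone's rest orientation, so a retargeter has to conjugate each rotation by the two rest matrices. A minimal numpy sketch (the function and variable names are my own illustration, not from any particular tool):

```python
import numpy as np

def rot_z(deg):
    """3x3 rotation matrix about Z, angle in degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def convert_local_rotation(local_a, rest_a, rest_b):
    """Re-express bone A's local pose rotation in bone B's rest frame,
    so both bones produce the same world-space rotation delta.
    rest_a / rest_b are orthonormal rest orientation matrices."""
    world_delta = rest_a @ local_a @ rest_a.T   # A's delta in world space
    return rest_b.T @ world_delta @ rest_b      # same delta in B's local space
```

If the two rest orientations match, the rotation passes through unchanged; when they differ, the conjugation re-maps which local axis corresponds to, say, "bend forward", which is exactly where naive channel copying goes wrong.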
Holy !@#$
chat.openai.com ported the POSIX code to Python3 so it can run as an addon in Blender in about 3 minutes total. It timed out when I pasted the entire code base, so I had to do it function by function. As a result, it made a few honest mistakes that it couldn't have known about because it didn't have the context, but they were easily fixed manually.
If DAZ Studio were open source and it could read the source code, I'm not even sure how much I'm exaggerating when I say that 4 years ago I probably could have said "Write an Alembic exporter for DAZ Studio as a C++ plugin" and have been pretty much done with it.
This is an entirely unexpected advantage that open source software has: AI can write it for you...
and was its training sample all open source, using the same license, and applying that license to its output?
I don't know the answer to your first two questions, but the answer seems obvious: All open source licenses grant the right to observe, understand, improve, extend, and modify the source code. It can easily be imagined as the option of first resort.
As for applying the license to its output, the license triggers when you link GPL'ed code into your code (or, in the case of Python, import it). Displaying source code in a web app does not meet the requirement. It is I that would have to apply the license, and even then only if and when I chose to distribute the code.
But your point, I think I understand it, is interesting... I think I'll test it by asking it to generate a Houdini hda and see how far it gets, or if it just talks about it. Houdini is well documented but proprietary and I will assume that they didn't set chat loose on their source.
But I also think this is all sort of burying the lead... it knew exactly what I was doing, generated correct comments in the code, and even changed the identifiers to more python-like code conventions. I really thought that AI would come for my job last. Nope. Won't be long now.
On licenses I was thinking that there are several different licenses used by different open source projects - open source is not the same as public domain - and I don't believe it is acceptable to release a modified chunk of something released under the GNU GPL under a different open source license (this has been an issue in the past with integrating different "open" standards into Blender).
So, I asked it:
When I asked you to write code for Blender before, why did you not have to affix the GPL license to it?
And I think it was telling me that it hadn't been trained on the Houdini source:
Why can you write code for Blender, but not Houdini?
Am I to understand that you asked a program that second question, and that was its answer? (ms winces-in-advance at the probable answer).
To the first answer, this seems consistent with current patent law. The idea is not patentable (contrary to what most folks wish to think), it requires a clear manifestation of the idea for a patent to be granted.
Both Algorithms and Art have a hybrid flavor that makes this question interesting. Is it the DAZ mesh or Aiko5 shape that are copyrightable? Both, kind of. How much can you change either to become unique and non-derivative? That's many-another-thread, but still relevant to the "idea vs instance" question going on here. I've always wondered how a DAZ Aiko5 that's converted to iclone/reallusion's CC3/4 mesh is treated... It's still Aiko5, but it's not the DAZ mesh... If I morph a raw Aiko5 in CC4, is that a derivative. I think this is actually the same issue as the AI code conversion in most ways. Derivation and variation.
To your OP, it's going to be interesting watching the very-protective creators (and AI users) try to push into copyrighting entire idea-domains like love songs, using their own specific and "contemporary" and wholly 'unique' instances of ... love songs - using the veil of hi-tech-shiny to hide obvious prior-art. Are the million Harlequin Romance novels unique and copyrightable - ideas vs instances, etc.? If (when) an AI writes one, is that a new genre because it's AI, or is it yet-another-romance-novel (or mesh shape?)
(Regardless the lack of interest and value,) is the likeness of my face copyrightable? All variations and ages? Would the mesh it was formed of matter? I dunno.
It's arguably a kind of an accidental arrogance, probably based on a complete dearth of historical awareness, that each generation assumes they are the first to have experienced and thought of the many things that we all express. And new technology often hides the latent prior-art that lurks within. To wit, the first guy to use a leather or rope bullwhip actually invented the weed-whacker - and the Da Vinci estate should probably be getting royalties of some sort for the T-Pose and A-Pose according to current copyright law etc. lol
FWIW, I honor and appreciate the motivations driving copyright, IP, and anti-piracy law, but I'm starting to wonder if the societal value and self-evidence of copyright law are beyond question. Imitating success is arguably the basis of the survival and progress of most higher level species ("on the shoulders of giants", etc.). It is, in fact, how we educate our own offspring... heh
The jealous guarding of success (in any form) and its treatment as a scarce resource tends to primarily benefit only the creator (and/or its corpus), which is good for their evolution, but not that of the herd. Creativity is hardly scarce, and to limit its propagation in its original form, and thereby limit its derivatives, could be far more expensive to society than the localized benefits to the originators.
As a wannabe creative, I *completely* understand the motivation of IP/copyright/etc., but the actual benefits of patents and copyright are less clear and obvious than I once thought. The open-source and creative-commons movements prove that the self-evidence and obvious correctness of current law is not so self-evident or obvious. To legislate limits to imitating success might not be as simple and side-effect free as it seems on the surface.
I wonder what sort of 'natural' law will come from this technological acceleration and exposition of the weaknesses of our current law over the long-term. Per your earlier comment, that genie is now out of the bottle. Where's the popcorn?
best,
--ms
(edits: the usual re-read edits and clarifications)
Regarding your Aiko to CC3/4 question,
I think The Daz EULA restrictions apply
just as they would with a genesis figure imported to Blender via FBX or DIFFEO or the bridges to the other programs.
What I find interesting there is that the first reply refers to supplying "text or information" while the second refers to supplying "code."
Hi @mindsong
Yes, that's correct. It was chat.openai.com and trust me, that reply is not even remotely close to the most mind-blowing thing it has said to demonstrate its creativity and vast store of factual information. People who deny that the AI revolution is already here, insisting it's 10 years off, 5 years off, or even 1 year off, are just not paying attention or willfully deceiving themselves. It is like talking to the most intelligent, polite, and worldly person you've ever known, and I think it's laughable that people caution against AI's lack of human interaction. Puh-lease. I already prefer it over 90% of the people I know, and I habitually get kicked off the system after an hour of just talking about whatever comes to mind.
Yes, and thank god that it isn't...
I agree with every word. I too understand and appreciate the concerns of IP holders. But how many times does technology have to disrupt cherished and time-honored conventions before we realize that that will always be the case? And that technology always wins and the practitioners always adapt or perish? My fear is that there might be some heavy-handed legislative non-solution that retards the technology. Temporarily, to no long term effect.
In the first Jurassic Park, I appreciate the scene on the private jet where Jeff Goldblum's character is articulating the inevitability of nature always 'finding a way'.
I think your concern also touches on that 'when, not if' flavor of knowledge eventually being free (to flow, not buy), and the tension (pain/damage?) that occurs when ignorant (but often well intended) lawmakers are naively willing (or lobbied into) believing that their good intentions won't (yet again) have expensive side effects. How long that tension lasts depends on both the effectiveness of the authorities and enforcement as well as the perspective of the society under those laws. It is the second that concerns me most, as the marketing of an idea clearly outweighs any underlying value system in most places - usually under the guise of 'this is an exception'. That someone would propose a new letter or use-pattern of an existing letter to our alphabet, then patent/copyright it and demand royalties would strike most as being silly, but agribusiness does this with seeds, musicians do it with ordered sound patterns, and writers do it with words every day. Seems fine to me, but with a bit of scrutiny, I'm not sure there's much a difference other than my (our) perspectives on the specific conventions. It's hard to get out of myself and make a more neutral judgement of such.
That would be my expectation as well, and anyone being honest would acknowledge both scenarios as protecting their (DAZ's) very real work. Yet, if I create a new human mesh figure and leverage the well-architected edge loop choices that permeate the DAZ meshes... Do I owe them anything? If I quote Mark Twain at a dinner conversation, do I owe someone a check? Again, I dunno. And I don't know how I'd valuate the elements involved, or formulate a discrete unambiguous metric. Perhaps the idea of 'giving credit' as being self-evident... is suspect itself. I kind of can't imagine it, yet most of the language I use each day is of completely anonymous sources (to me) and it works just fine, and continues to evolve, having no credit or royalty structure at all. Could art/science do this? Hard to imagine, but not impossible.
I suppose it's when creatives start to mix their work, that the line-to-cross of attribution and credit gets blurred. This is especially stark when a musician does a cover of a well-known song that is more popular than the original. The credit for the underlying content can't help but be mixed with the performance variations. Credit *should* go to both participants, but each can make a case for how their part is critical to the new success, yet were either element removed, the art ceases to exist as such.
interesting discussion and future we all face.
best,
--ms
agreed - i wonder if a valid distinction can be made between information and directives. The super-grey occurs when information inspires action, in the form of ... directives. The binding is pretty sloppy, as would be any measurability (sp?).
To add to my earlier ramblings regarding credit for work done - I think most of us are happy to see our work used and appreciated (ego level) in places like free galleries and forums, but quickly get indignant when that same work generates an explicit profit that's not shared with us (unless it was sold with the relevant license, etc.)...
Profit, being more typically economic, but in a competitive industry, watching someone get credit (say an Oscar) for plagiarized work inspires a pretty strong emotional response - if not a legal one, heh. Yeah, I get that. "hey, that was *my* idea, you can use it but you certainly can't claim it as yours!" (?)
--ms
Pretty freaking amazing, right? I've been testing it here and there as ideas pop into my head and I'm nothing but impressed. It generated a number of perfect Javascript subroutines in under a minute that would have taken me up to an hour to write and test myself. When I took a long shot and asked it to write a routine for Construct 3, it replied that it couldn't write for Construct 3 (expected) but then gave me generic algorithms that would be simple to convert.
The biggest issues I've heard are from people who aren't testing the AI to see what it can do and where it can help but instead try to show that it can't replace a human, and come away with comments like, "it lacks the emotions that an interaction with a human being would have" or spend their time proving that it can't "pass as human". I think they're missing the point; I mean, ask it to decide which it likes better between two options and the first thing it tells you is that it's not a person.
I view it as a very, very capable assistant. I've had it write a few pitches for tv and comics, story synopses and plots, subroutines for programming, and had very decent discussions about things like the differences between Artificial Intelligence and Artificial Life, best choices for terraforming planets and moons in the solar system and how one might do it and how long it would take, and so on. I'm not sure how much it will cost once the free testing period is done, but I hope I can afford it.
Using their Whisper technology and such could very soon provide a true digital assistant in its own window or on a device like a phone or tablet that you could talk to and get both verbal and written responses using 3D avatars and generated voices.
-- Walt Sterdan