Multi-threaded loading/saving
Hi
So for some of the people with a large number of characters installed, loading a Genesis figure takes a significant amount of time. The same is true if you load heavy scenes with a lot of high-res textures / models.
Would it ever be possible, with the use of scripting, to create a plugin which can load content with more than a single thread?
So I know you can speed up Genesis by removing characters (and thus all the numerous morphs that come with them), but I would prefer to keep them all in my runtime with easy access to them. That also wouldn't really help with the other problem of slow loading times for other heavy content anyway.
One thing I did notice is that Daz Studio loading / saving is strictly single threaded. This can be pretty frustrating on my CPU, which has 16 threads: 15 of them are relaxing while my PC struggles to load content. It would be great if more of them could be put to work. I would also like to upgrade my CPU to a Threadripper at some point, but currently its single-core performance is about the same, so even if I have double the threads (and double the total CPU power), figure / scene loading times will not improve even slightly. I could get a CPU with slightly better single-core performance, but those are generally less powerful overall, and the gain would be 30% faster at best.
The only solution, whilst still having access to everything, is using more cores / threads. I already have a PCIe 3 M.2 SSD, and perhaps soon a PCIe 4 version, so storage is currently nowhere near the bottleneck.
Is there any way this could be done with scripting / plugins etc., even if it's a big job or hard to implement?
Comments
I would suspect that much of the process is inherently single-threaded.
I think the limitation in loading from the hard drive is the hard drive itself, which is inherently single threaded: the read/write head can only do one thing at a time. Yeah, you can optimize that, but no matter how multithreaded D|S is, it doesn't matter when it comes to loading stuff from the hard drive. I suspect SSDs, etc., aren't much different.
Having just suffered through the dreaded "OMG, why did I delete the figure instead of resaving the scene as a subset without it?" and then waited a couple of hours for Daz to delete it (because I didn't have a fresh save before adding the character)...
---
It seems there would be a couple of ways for Daz to handle its content in a scene: (1) one long list delineated by markers, or (2) an array of all items, each with its data stored under its name.
Under (1) you have to walk all the way down the list to delete each part of the item (and repeat until no more parts remain); under (2) you find its entry point and keep deleting from that point.
Yes, I realize this is supposition... but how else do you explain a couple of hours to delete something that was loaded in two minutes?
And no, this doesn't happen too often, because I know to just do the save-as-subset trick.
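The two storage schemes guessed at above can be sketched in a few lines of Python. To be clear, the names and structures here are my own invention for illustration, not Daz's actual internals: a flat list has to be rescanned end to end for every deletion, while a store keyed by item name can drop an entry in one lookup.

```python
# Hypothetical sketch of the two scene-storage schemes described above.
# Scheme 1: one flat list of (owner, part) entries, delineated only by
# the owner names -- deleting an item means scanning the whole list.
# Scheme 2: a dict keyed by item name -- deletion is a single lookup.

def delete_from_flat_list(scene, name):
    # O(n) per call: every entry is inspected and kept or dropped.
    return [(owner, part) for (owner, part) in scene if owner != name]

def delete_from_dict(scene, name):
    # O(1) on average: jump straight to the entry and remove it.
    scene.pop(name, None)
    return scene

flat = [("Victoria", "mesh"), ("Victoria", "morphs"), ("Prop", "mesh")]
keyed = {"Victoria": ["mesh", "morphs"], "Prop": ["mesh"]}

flat = delete_from_flat_list(flat, "Victoria")
keyed = delete_from_dict(keyed, "Victoria")
```

With a handful of entries the difference is invisible; with thousands of morphs per figure, the repeated full scans of scheme 1 are the kind of thing that could turn a two-minute load into a multi-hour delete.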
And this, by the way, is what triggered my other question about how to kill a process in Daz without killing the program.
--
As far as I can tell, once I goof and hit the delete button and Daz goes comatose, I can't cancel, the way we can cancel a render.
---
And actually, deleting from a scene wouldn't be accessing the hard drive... it would be removing the item from the RAM being used.
--
While we all laugh about MS's 640K... most people don't know that the original Mac/Adobe font definition had to be redone because they couldn't picture anyone having more than 256 fonts, so they only allowed for that many ID numbers.
When the base Daz was created, did or could anyone really predict the requirements of G8.1 based on V1?
In this case I think the advances in the models are way outpacing that of computers.
You might have been right in the days of actual hard drives, but on fast SSDs we're barely using 1% of the drive's potential.
It is clearly a matter of threading the workload.
Without knowing the details of what is involved, that is conjecture. Daz Studio needs to link the morphs together correctly, and I would think that is not a process that can be handled in parallel.
I don't know where that 1% figure came from, but saying that "It is clearly a matter of threading the workload." reminds me of that Steve Martin skit called "How to become a Millionaire" whose first line is "First, start with a million dollars..." Threading is hard. Especially when there are many threads because the problem moves from parallelizing to scheduling and resource contention.
From the performance problems, we can infer that DS's architecture is complicated; too complicated to say "Well, all we have to do is thread it." It is never that simple in apps that were not threaded from the very beginning.
I kinda think that DS is threading a lot of stuff. I started up HWiNFO64 and logged stats while loading a scene. Most of my CPU cores began to pick up lots of activity, then slowed down towards the end of the scene load.
I really should do some more structured testing to be sure. The only things I did were make sure I didn't have any other apps running, and turn the display to wireframe so it wouldn't eat up much after the load. And do it with a really, really big scene.
Attached is a screenshot of the table of the results.
Let me know if anyone is interested in some more tests.
I do multiprocessing and multithreading programming, so I know a little about this stuff ( probably just enough to be dangerous ).
Ohhh, the columns are time stamps recorded every 0.02 seconds.
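For anyone who'd rather script this kind of probe than log with HWiNFO64, here is a minimal sketch of the same idea in Python, assuming the third-party psutil package is installed: sample per-core utilization at a fixed interval (0.02 s, matching the columns above) while the load runs, and inspect the log afterwards.

```python
# Log per-core CPU utilization at a fixed interval, similar in spirit
# to the HWiNFO64 table described above. Requires the psutil package.
import time
import psutil

def sample_per_core(duration_s=1.0, interval_s=0.02):
    """Return a list of per-core utilization snapshots (percent)."""
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        # percpu=True gives one percentage per logical core; the
        # interval argument blocks for that long while measuring.
        samples.append(psutil.cpu_percent(interval=interval_s, percpu=True))
    return samples

# Short demo run; in practice you'd start this just before loading a scene.
log = sample_per_core(duration_s=0.2)
print(len(log), "snapshots of", len(log[0]), "cores")
```

If most cores show activity during the load and fall idle towards the end, that matches the pattern reported above; it still doesn't tell you *which* parts of the load are serial, only that some of it fans out.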
While anyone would commend your efforts,
1) What conclusion do you think you will be able to draw from your data, having no idea what the code is actually doing, i.e. which tasks are inherently serial and which are "embarrassingly parallel", as we say? Without knowing that, you have no context in which to say if something could be threaded but isn't, and could benefit from a redesign. I admit that you've gone through a cool process that I myself could not have done on Windows, but it does not seem like useful data... it's not "actionable".
2) Do you think you will be successful in lobbying DAZ to act, based on your data? Good luck with that.
3) What do you intend to do with the data you've collected? It is unlikely that you've discovered something that the DAZ devs don't already know and are in a much better position to interpret, reference point #1.
But if you were just nerding out, using some cool tools and applying your skills, I get that. And the absolute most frustrating thing about DAZ Studio is that it is closed source, so we'll never get to truly play with it.
PS - It sounds like you dig this sort of thing, so why not try your hand at Blender, if you haven't already? With the source code, your analysis could be much more meaningful, you could actually have an impact, people would appreciate your efforts to make the software better, and you would probably end up talking directly to one or more of the actual developers. Everything you are not going to get here. Just sayin' :)
I wasn't saying Daz needs to redesign their threading code. And I did mention it was just a quick, basic examination of CPU utilization. I repeated it a number of times and saw the same pattern.
Also, I repeated it a few times with the affinity set to only one core: utilization went to 100% on those two core threads, disappeared from most of the others, and load time almost doubled. This 'kinda' implies that many of the Daz threads are somewhat parallel/async.
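The affinity experiment above can also be reproduced from code. A hedged sketch using psutil follows; `busy_work` is a stand-in for the real scene load, and note that `Process.cpu_affinity` is only available on Linux and Windows, which is why the sketch guards for it.

```python
# Time a workload while pinned to a subset of logical cores, then
# restore the original affinity mask. Requires the psutil package.
import time
import psutil

def timed_with_affinity(workload, cores):
    """Run workload pinned to the given logical cores; return elapsed seconds."""
    proc = psutil.Process()
    original = proc.cpu_affinity()      # remember the current mask
    try:
        proc.cpu_affinity(cores)        # restrict to the given cores
        start = time.perf_counter()
        workload()
        return time.perf_counter() - start
    finally:
        proc.cpu_affinity(original)     # always restore the original mask

def busy_work():
    # Stand-in workload; the real experiment would load a scene here.
    sum(i * i for i in range(500_000))

if hasattr(psutil.Process(), "cpu_affinity"):  # unsupported on e.g. macOS
    t_one = timed_with_affinity(busy_work, [0])
    t_all = timed_with_affinity(busy_work, list(range(psutil.cpu_count())))
    print(f"one core: {t_one:.3f}s, all cores: {t_all:.3f}s")
else:
    t_one = t_all = None
```

For a purely single-threaded workload like `busy_work`, the two times will be similar; the near-doubling reported above only appears when the workload actually spreads across cores, which is exactly what makes pinning a useful probe.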
Nope, just gathering a little info, as opposed to 'complete' speculation on the unknown code. It's not that hard to profile these tests (about a half hour of spreadsheet work the first time). And HWiNFO64 can log lots of useful information on the cores, storage I/O, GPU, memory I/O and residency, ... Sometimes it can inform your speculation, sometimes it doesn't. But it doesn't cost much to take a look.
And I'm not trying to lobby DAZ for anything, just to see if I can figure out how to get a little quicker load time.
I don't think there is much that can be done except make sure your hard drive is not holding up the process (and don't forget to set your 'temp' and 'dson/cache' folders to your fastest storage; I put the PostgreSQL data there too).
I looked at Blender a bit a few years ago, but I have too many other hobbies that I like to split my time between. The nice thing about open source is that even if you don't recompile the source, just reading the code will enlighten you on how to do things more efficiently.