CPU ASSIST w GPU

marcelle19 Posts: 171
edited February 2023 in Daz Studio Discussion

I was told some time ago - by someone here - to turn on the CPU assist with my NVIDIA GeForce GTX 1660 SUPER, and it helped a lot. I went to an RTX 3060 from PNY, and it worked great for about 3 months, then the fan went loose or something, and it almost fried my system.

So, I put the NVIDIA GeForce GTX 1660 SUPER back in - a step down of quite a bit - and the computer started working again, with CPU assist turned on in Daz Studio.

What I am wondering about: Sometimes (quite a lot) the GPU takes the major part of the load, with the CPU hardly ever reaching 20 percent, but at other times the CPU takes almost all the load and the GPU hardly takes any at all (naturally, this lengthens render times). I use the same settings in Render Settings, the same number of lights, etc., and there doesn't seem to be any rhyme or reason to why the CPU sometimes takes almost all the load of a render while the GPU does hardly anything.

I'm sure all of you more knowledgeable people know something I don't about this. If you could give me a clue ('cause yeah, I'm clueless), I would greatly appreciate it.

EDIT: When the CPU takes most of the load, the render history frequently looks like this: rendering frame 1, rendering frame 2, rendering frame 3, rendering frame 4 - then it loads all the preliminaries again and repeats, sometimes a lot. The render still progresses, but it doesn't keep counting out the frames in sequence, just goes from 1-4 over and over again.

Thank you. - Bill

Post edited by marcelle19 on

Comments

  • In general, having the CPU enabled is going to cause the render to be slower, not faster. An Epyc, Xeon, Threadripper, or other HEDT chip, with a huge core count and/or high clock speeds, may not be as much of a bottleneck.

    As an example, for a general test render I use: CPU (8c/16t 3.3 GHz, dual-Xeon server) rendering takes 56 seconds, GPU (Tesla M40) rendering takes 10 seconds, and CPU plus GPU takes 14 seconds.

     

    The CPU and GPU loads you're reporting are fairly normal, at least in my experience.

    When in GPU-only mode, you will get at least one core per GPU in your system at ~100%. For instance, a 4-core system will report ~25% CPU utilization during rendering with a single GPU. With multiple GPUs, this number goes up.
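    A quick sketch of that arithmetic (the one-feeder-core-per-GPU figure is a rule of thumb from observation, not a documented guarantee):

```python
def gpu_only_cpu_percent(total_cores: int, gpu_count: int) -> float:
    """Estimate reported CPU utilization during a GPU-only render,
    assuming roughly one CPU core stays busy feeding each active GPU."""
    busy_cores = min(gpu_count, total_cores)  # can't occupy more cores than exist
    return 100.0 * busy_cores / total_cores

print(gpu_only_cpu_percent(4, 1))  # 4-core box, one GPU -> 25.0
print(gpu_only_cpu_percent(4, 2))  # add a second GPU -> 50.0
```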

    When in GPU+CPU mode, it's not uncommon to see variations in utilization between the two. I don't use this mode, due to the bottleneck, so I won't go into any suppositions.

     

    If the CPU is taking all the load, it's possible that the GPU has dropped out due to exceeding its VRAM. A GTX 1660 SUPER only has 6 GB of VRAM.

    If it's the only GPU in your system, part of that VRAM is going to be lost to the OS and any software you're running, including DS.

    I'd suggest using GPU-Z, nvidia-smi, or another GPU monitor to keep track of the amount of VRAM being used before rendering. I'd also suggest not using the Iray preview, and adjusting the viewport settings (Edit > Preferences > Interface, or F3 > Interface) to their minimums to reduce VRAM utilization while working.
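    If you'd rather script that check than watch GPU-Z, something along these lines can log per-GPU VRAM use before you hit render (a sketch: the nvidia-smi query flags are real, but treat the exact output handling as an assumption):

```python
import subprocess

def parse_vram_mb(csv_text: str) -> list:
    """Turn nvidia-smi CSV output (one MiB value per GPU, one per line)
    into a list of integers, e.g. '1843\n' -> [1843]."""
    return [int(line.strip()) for line in csv_text.splitlines() if line.strip()]

def vram_used_mb() -> list:
    """Query per-GPU memory use via nvidia-smi (needs the NVIDIA driver installed)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_vram_mb(out)
```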

    If you have certain settings enabled, such as the bloom filter or denoising, the VRAM utilization can fluctuate, often causing the drop to CPU. Sometimes it will go back to GPU rendering once the VRAM usage drops to what the GPU can handle. Mileage varies.

     

    When it comes to rendering, you need to consider everything: render settings, lights, geometry, materials, even camera location. Even a 'minor' change to any of these can cause major differences in VRAM/system-RAM usage and render speed.

     

    For instance, a G8F with nothing else in the scene, all default render settings, and the 'Front' camera takes ~1.6 GB of VRAM at render. If I up the Sub-D to 5, the VRAM utilization jumps to 5.3 GB.

    It also takes over a minute to render, 1m19s, because of the CPU processing - single-core, by the way - needed to bridge the difference between the scene Sub-D (Subdivision Level) and the render Sub-D (Render SubD Level (Minimum)).
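    The reason the jump is so dramatic: each subdivision level roughly quadruples the face count. A toy estimate (the 16,000-quad base mesh here is illustrative, not the actual G8F figure):

```python
def subd_faces(base_faces: int, level: int) -> int:
    """Each subdivision level splits every quad into four,
    so the face count grows by a factor of 4 ** level."""
    return base_faces * 4 ** level

# Hypothetical 16,000-quad base mesh:
print(subd_faces(16_000, 2))  # 256,000 faces
print(subd_faces(16_000, 5))  # 16,384,000 faces - why VRAM balloons at Sub-D 5
```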

    Turning off limits and raising the Subdivision Level to 5 reduces the render time to ~33 seconds, but it takes a bit of time for the software to apply the change, which will look like a 'hang'. It took my system about 40 seconds to make the change. On a single render it's not really worth it, but on an image series or multiple renders, it definitely can be.

    The increase in Sub-D level also has a corresponding increase in VRAM utilization before rendering. In my case, it went from ~100 MB to ~1.1 GB; GPU-Z reported 780 MB of VRAM used before the change and 1.8 GB after.

    The total VRAM utilization of this test would easily exceed the VRAM of the 1660 SUPER you have.
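    A back-of-the-envelope check along those lines (the 1 GB overhead figure is a guess at what the OS and other running apps hold on a display GPU, not a measured value):

```python
def fits_on_gpu(scene_vram_gb: float, card_vram_gb: float = 6.0,
                overhead_gb: float = 1.0) -> bool:
    """Rough test of whether a scene's estimated VRAM fits on the card
    once OS/driver/application overhead is subtracted."""
    return scene_vram_gb <= card_vram_gb - overhead_gb

print(fits_on_gpu(1.6))  # default-render G8F on a 6 GB card -> True
print(fits_on_gpu(5.3))  # the Sub-D 5 case -> False: the render drops to CPU
```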

     

    Moving the camera in closer, going from a full body to just the upper torso and head, the VRAM utilization stays the same, but the render time goes up, from 33s to 1m5s.

    I was able to shave ~15-20 seconds off that in subsequent tests, but I had to follow a very specific sequence of changes to get that result. It wasn't really worth it, as making the changes took longer than the savings in render time.

     

    Once you start adding clothing, hair, and other assets, it gets way more complicated.

  • DrunkMonkeyProductions said:

    In general, having the CPU enabled is going to cause the render to be slower, not faster. [...]

    Thank you! So much information! I understood more of what you said than I would have a year or so ago, and I think I can work out the other stuff I didn't get right away.

    Sadly, the RTX 3060 I bought (when the prices came down because their new line was coming out) busted after just a few months, and of course the return policies are all for a month or less. I tried contacting PNY about the warranty, but so far I can't reach anyone but the operator, who sends me to a phone line where I leave a message that never gets answered. That 3060 almost cost me my computer. You wouldn't believe the weird things it did - I think one of the fans got a chip in it, went off balance, and burned out; even though it started spinning again, it would shut down the system every time.

    Anyhow, I'll have to wait a long while before I buy another one, and make sure I get Best Buy's guarantee plan. No more buying online, either - in store, with all the bells and whistles they provide for the right price.

    Thanks again for all the information. I'm going to have to read this a few times to get it all. :)

    I seem to have found a balance where the GPU takes most of the load - I can't recall the settings off-hand, but the GPU takes around 90-100 percent of the load, so it's working.

  • namffuak Posts: 4,146

    I think you'd get better results on the warranty by using the PNY website. I've not worked with them, but I had an MSI 1080 die about 13 months after purchase and the process on the MSI website was painless.

  • marcelle19 Posts: 171

    namffuak said:

    I think you'd get better results on the warranty by using the PNY website. I've not worked with them, but I had an MSI 1080 die about 13 months after purchase and the process on the MSI website was painless.

    Thank you! I never thought of going to their website - I just did; the chat was closed, but I left them a boat-load of information - so many different numbers on the board and the sales slip, and I included them all. :) Hope you're right and it helps - I'm going to try to reach them when they're open if I don't hear from them soon.

    Thanks again!
