Still Using NVIDIA 3060

Is there any better graphics card than an NVIDIA RTX 3060 12 GB for rendering out images for animations? My computer runs Windows 11 with the latest DAZ Studio 4.22, a 12th-generation Intel i9 chip, and 128 GB of RAM! I find this setup stable, and I can render at 2560 x 1440 for 5 days without overheating! Any suggestions for anything better? Cheers.

Comments

  • Richard Haseltine Posts: 100,948

    3080, I think, 3090, 4060 Ti (16 GB), 4080, 4090, A-series cards (up to 48 GB)

  • Mattymanx Posts: 6,908

    I have a similar system but with a 4090. I highly recommend getting the 4090 if you have room for it in your case.

    Comparison chart for all 4090s - https://www.techpowerup.com/gpu-specs/geforce-rtx-4090.c3889

  • Will the RTX 4090 render any faster, and is it as stable over long renders as the RTX 3060? The RTX 4090 24 GB is over $3,000 here in Australia. I would expect a major decrease in rendering time for that price! Cheers

  • Mattymanx Posts: 6,908

    Here is an article comparing the 4090 to other cards for 3D rendering in multiple programs and render engines. The 3060 12 GB is included as well.

    https://techgage.com/article/nvidia-geforce-rtx-4090-the-new-rendering-champion/

  • The comparisons do not include DAZ Studio's NVIDIA Iray! So I am not convinced that the cost of an RTX 4070 would be justified. Overheating over days of rendering is not covered either! Cheers

  • Kaze Posts: 51

    In my experience, there are two key specs that matter most for rendering via Iray on an NVIDIA GPU.

    1. VRAM - To get the full benefit of GPU rendering, you need enough VRAM to hold the entire scene on the card. Once the scene needs more VRAM than the card has, Iray abandons the GPU and falls back to rendering on the CPU instead; there is usually a message in the log saying the switch was made. This causes a huge slowdown, and you no longer get the big benefit of using an NVIDIA card. It can be mitigated in a few ways, in particular:

    • getting an NVIDIA card compatible with NVLink on the RTX 2000 or 3000 series and linking two cards together. This lets you pool the VRAM of two cards of the same SKU, so 2x 3090 gets you 48 GB of VRAM through NVLink; you can't pair a 3090 with a 3080. I never knew how many people actually did this, and it never seemed popular among gamers. NVIDIA essentially dropped its promise of SLI support for gaming, and game developers never put much focus on making multi-GPU setups beneficial. I have never tried it myself, so I don't know whether it still works today.
    • There is an optimizer plug-in sold in the Daz store that reduces the size of the textures on any of the models used, which in turn reduces memory consumption. You lose some quality, but you can render more objects. Textures are the biggest contributors to memory consumption.

    The amount of memory you need is determined entirely by what you want to render. I usually test how many different characters I can render in an example scene before the VRAM is full and it switches to CPU rendering. You can then estimate that the character count scales roughly in proportion to VRAM: if I can render 3 people comfortably on 4 GB, you can probably render 6 people with 8 GB. If you add the optimizer plug-in, you can potentially multiply the character count again by the percentage reduction in texture size. Of course these are estimates, but in my experience they are pretty good napkin math.
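
    As a rough illustration of that napkin math, here is a minimal sketch in Python. The baseline figures and the estimate_characters helper are illustrative assumptions, not measured Iray numbers:

        # Napkin-math VRAM estimate: character count scales roughly with VRAM.
        # All figures here are illustrative assumptions, not measured values.
        def estimate_characters(vram_gb, baseline_chars=3, baseline_vram_gb=4,
                                texture_reduction=0.0):
            # texture_reduction is the fraction of texture memory saved by an
            # optimizer, e.g. 0.5 if optimized textures use half the memory.
            chars = baseline_chars * (vram_gb / baseline_vram_gb)
            if texture_reduction > 0:
                chars = chars / (1.0 - texture_reduction)
            return int(chars)

        print(estimate_characters(8))                          # ~6 people on 8 GB
        print(estimate_characters(12, texture_reduction=0.5))  # ~18 on a 12 GB 3060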


    2. CUDA Cores - The number of CUDA cores has been the most consistent specification for predicting the render time needed to reach a given number of samples, though I don't treat it literally as one core per sample. The best way to build an intuition for the speed increase is to render a scene with your current card at a fixed sample count. Say it took 10 minutes to render my scene at 1,000 samples. If you get a second identical card and install both for use with Iray, you will cut your render time roughly in half. Installing a single card with 2x the CUDA cores will also roughly halve your render time.

    The single bigger card is usually more energy efficient: around the same performance at lower power consumption, because the extra power-conditioning circuitry on a second card loses some energy as heat through electrical resistance, and the fewer stages the power passes through, the less is wasted. Keep in mind that the bigger card will probably be more expensive, since packing that performance into a smaller form factor is harder to produce. I like to run efficient because all the wasted heat gets dumped into whatever room your system is in; it is like a small space heater. In the long run you save on energy costs, both because the card consumes less and because you don't have to cool the room to offset the extra heat.
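
    Here is a minimal sketch of that scaling rule, assuming render time scales inversely with total CUDA core count (a rough approximation, not an exact Iray model):

        # Estimate a new render time from the ratio of total CUDA core counts.
        # Assumes near-linear scaling, which is only a rough approximation.
        def estimate_render_time(current_minutes, current_cores, new_cores):
            return current_minutes * current_cores / new_cores

        # A 10-minute, 1,000-sample render on one RTX 3060 (3,584 cores):
        print(estimate_render_time(10, 3584, 2 * 3584))  # two 3060s -> 5.0 min
        print(estimate_render_time(10, 3584, 16384))     # one 4090 -> ~2.2 min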


    Another point: once a new generation is released, the old generation usually drops in price, meaning you could get a 50% reduction in render time for considerably less money, so long as you have a power supply that can keep up. If you have to upgrade your power supply, I recommend future-proofing by choosing a high output wattage. Power delivery does not change as often as semiconductor technology, so you don't normally replace a power supply every few years, and a 1000 W unit from 10 years ago has about 90% of the functionality of the newest ones. Most of the differences are extra conveniences, like a connector for something very specific, and most of those can be resolved with high-quality adapters, though efficiency is usually better on newer units. There are good estimators online for guessing what size power supply you need; unless you need NASA-level tolerances, you will be fine.


    I always find it more meaningful to estimate based on what I have and what I would normally render, rather than on benchmarks that don't mean much to me because they aren't anything I would want to render anyway, or that hammer on features I don't really care about. With this kind of intuitive approach I get a better understanding of how my next graphics card purchase actually matters to my normal usage, and from there I can figure out the kind of scale I can achieve in my renders.

    Let's say I am running an RTX 3090 with 24 GB of VRAM and I want the RTX 5090 as my next card. The 3090 has 10,496 CUDA cores and the 5090 has 24,576, roughly 2.3x as many. It would be like adding about one and a third more 3090s, at far less energy consumption, and you could divide your normal render time by roughly 2.3: a render that takes 24 hours would instead take a bit over 10 hours. That amount of time saved is worth it to me. When society has most everyone working 5 days a week, we only have 2 days a week to actually live life. Buying time with money is not possible, but saving time with money is. Just be sure you can afford it; the money you spend on a graphics card might be better spent feeding your family, and that matters more than any render time.

  • Gordig Posts: 10,058
    edited March 30

    csteell_c2893e4ab6 said:

    The comparisons do not include DAZ Studio's NVIDIA Iray! So I am not convinced that the cost of an RTX 4070 would be justified. Overheating over days of rendering is not covered either! Cheers

    Every benchmark shows the 4090 being roughly 4-5 times faster than a 3060 (judging just by eyeballing the graphs, it might be more substantial than that), so what do you think is so different about Iray that the advantages would be significantly less?

    Post edited by Gordig on
  • Kaze Posts: 51

    According to the quote, he is looking at the 4070, not the 4090. In that case I would agree with him, especially when you can get used 3090s for about $819 in current Buy It Now listings on eBay. That is $280 more than the current price of a new 4070 ($539), and you get a significant jump in performance and memory: more than a 2x performance increase for a single card.

  • csteell_c2893e4ab6 Posts: 481
    edited April 1

    Just a thought, seeing money is tight! How would two NVIDIA RTX 3060 12 GB cards go? Would that help my render times? A further question: is anyone using an NVIDIA RTX 4090 24 GB card? I am interested to know how big a power supply they have and what the heat is like in a small room. Cheers

    Post edited by csteell_c2893e4ab6 on
  • Kaze Posts: 51

    csteell_c2893e4ab6 said:

    Just a thought, seeing money is tight! How would two NVIDIA RTX 3060 12 GB cards go? Would that help my render times? A further question: is anyone using an NVIDIA RTX 4090 24 GB card? I am interested to know how big a power supply they have and what the heat is like in a small room. Cheers

    Starting from an RTX 3060, adding one more would cut your render times roughly in half, since the total number of CUDA cores would be 2x what you had before. If you have 3 PCI Express slots, you could install 3 RTX 3060 cards, which would cut your current time to about 1/3 of what you are seeing now.

    I forgot to mention before that the NVLink connection was only available on the RTX 3090 among the 3000 series cards. If you were hoping to combine memory, you can't do it on an RTX 3060.

    No matter what setup you have, the heat output will make a small room hot. Rendering pushes the GPU to pretty much maximum power consumption. If you look up the specs on the cards, they usually list an estimated total power draw; that electrical load on the system ultimately ends up as heat. The RTX 4090 is rated at 450 W, and you can expect the card alone to dissipate about that much power as heat. For a relatable comparison, a space heater can dissipate about 1500 W into a room.

    Imagine your room is well insulated and doesn't have much circulation to remove heat. Put one of those space heaters inside and run it for about 10 minutes; it could get hot in there, depending on how small the room is. Now imagine the RTX 4090 alone, dissipating its full 450 W into the same room: it would take a little over 30 minutes of operation to dump the same amount of energy.

    The two RTX 3060 cards you are considering are rated at around 170 W each, so running two would mean about 340 W of dissipation. It would take about 45 minutes of rendering to dump the same amount of heat energy into the room.

    Now consider the amount of CUDA cores that each set up has.

    1x 4090 = 16384 CUDA cores

    1x 3060 = 3584 CUDA cores

    2x 3060 = 7168 CUDA cores

    Comparing the CUDA core counts, the single RTX 4090 would do roughly 2.29x the work of the 2x 3060 setup for the same number of rendering samples. Put the other way around, the 4090 could render the same number of samples in a little under 44% of the time the 2x 3060 setup needs, or roughly half.

    Let's say you have an image set to render 1,000 Iray samples, and it takes 30 minutes on a single RTX 3060.

    With 2x RTX 3060 setup, it'll take you about 15 minutes.

    With the single RTX 4090 setup, it'll take you about 7 minutes to do the same.

    In comparison to the 1500W space heater,

    For the single RTX 3060 card, it would be like turning on the space heater for about 3.4 minutes.

    For the 2x RTX 3060 cards, it would also be like turning on the space heater for about 3.4 minutes. The power output is 2x as much, but you render for half the time, so the higher rate of power output is pretty much cancelled by the shorter duration.

    For the single RTX 4090 card, it would be like turning on the space heater for about 2.1 minutes. Although the power output is higher, the render time drops by a larger factor than the power rises, making it more power- and time-efficient for the same number of samples.

    The numbers here only consider the heat from the graphics cards; the rest of the system contributes some as well. The calculations are rough, but they should help build intuition when planning what your upgrade would be like.
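
    A small sketch of those estimates, using the same rough assumptions as above (render time inversely proportional to CUDA cores, all rated card power dissipated as heat):

        # Compare render time and total heat, expressed as minutes of an
        # equivalent 1500 W space heater. Rough model: time ~ 1/cores.
        HEATER_WATTS = 1500
        BASE_MINUTES, BASE_CORES = 30, 3584  # 30-minute render on one RTX 3060

        setups = {
            "1x RTX 3060": (3584, 170),
            "2x RTX 3060": (7168, 340),
            "1x RTX 4090": (16384, 450),
        }

        for name, (cores, watts) in setups.items():
            minutes = BASE_MINUTES * BASE_CORES / cores
            heater_minutes = minutes * watts / HEATER_WATTS
            print(f"{name}: {minutes:.1f} min render, "
                  f"like the heater running {heater_minutes:.1f} min")

    The 4090 line comes out near the ~2.1 minutes above; the small difference is just rounding of the render time.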

    For the room where your computer is located, it could be that even having the space heater on for 1 minute makes it unbearable. You may need to add an air conditioner, or you might be able to dump the exhaust heat from your system into your attic with some clever ductwork. Then the heat is not heating up your room directly. Just watch out for bugs, clogs, and critters.

    If you want a guess at how big a power supply to use, a common rule of thumb is to pick one with an output rating around 2x your computer's estimated total power draw (the combined TDP, or thermal design power, of all its components). Running at 2x gives you a safe margin to handle sharp, unexpected spikes in consumption, so the power supply doesn't burn out or get damaged as easily. You can probably get away with 1.5x, but 3D rendering is a heavy, sustained load, so running at 2x the maximum draw just makes sense to me.
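
    A quick sketch of that rule of thumb; the component wattages below are illustrative placeholders, not exact figures for any particular build:

        # PSU sizing by rule of thumb: sum component power, apply a margin.
        # Wattages are illustrative assumptions, not measured values.
        components = {
            "RTX 4090": 450,
            "i9 CPU": 250,
            "board, RAM, drives, fans": 100,
        }

        total = sum(components.values())  # ~800 W estimated peak draw
        print(f"Estimated draw: {total} W")
        print(f"1.5x margin: {total * 1.5:.0f} W")  # ~1200 W
        print(f"2.0x margin: {total * 2.0:.0f} W")  # ~1600 W PSU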

  • PerttiA Posts: 10,024

    Kaze said:

    I forgot to mention before that the NVLink connection was only available on the RTX 3090 among the 3000 series cards. If you were hoping to combine memory, you can't do it on an RTX 3060.

    NVLink was also available for RTX 2080 and 2070 Super 

  • Kaze Posts: 51

    PerttiA said:

    Kaze said:

    I forgot to mention before that the NVLink connection was only available on the RTX 3090 among the 3000 series cards. If you were hoping to combine memory, you can't do it on an RTX 3060.

    NVLink was also available for RTX 2080 and 2070 Super 


    Yes, that is correct. OP was looking at another 3060, though, and for the 3000 series NVLink was only available on the RTX 3090. I would not necessarily recommend going back as far as the 2000 series cards. They are still capable, but it has been quite some time since they were released, and if money is being spent on new hardware, it is probably better spent on less-used cards that will last until the next upgrade. You never know how the previous owners treated their cards, and the kind of work needed for rendering is computationally intense. Years of expansion and contraction from heating and cooling cycles degrade the solder joints, which may need a reflow to get the card working again if it fails. Every generation you go back adds risk, though you may still be able to find some lightly used 2000 series cards.

    Also, NVLink is a dying technology. From what I remember, PCI Express gen 5 is supposed to have bandwidth comparable to some early generations of NVLink, so NVIDIA is likely not offering it on the 5000 series, just as they dropped it for the 4000 series. There was also an advancement in data throughput between SSDs and GPUs called RTX IO, a kind of hardware acceleration for moving data. It is the sort of thing that allows high-resolution textures to stream from disk on the fly, as in some of those early UE5 tech demos, though I think those used DirectStorage at the time. I don't really know what utilizes RTX IO at the moment; my guess is some scientific applications. I have always suspected that with data transferring that quickly between SSD and GPU, we could potentially overcome the limits of onboard VRAM capacity. Can you imagine being able to use the entirety of your SSD for storing a scene to render in Iray? You could probably render a small town. I've always been hopeful that it is all leading to a real-time path tracing setup we can all enjoy in VR or something like that.

    Thankfully, 12 GB on the 3060 can get you plenty far with the number of resources you can render in Iray, so OP should be fine going with another 3060. I used to be limited to 4 GB when I was rocking dual GTX 980 cards; I miss the wacky designs where multiple cards were effectively duct-taped together. With 4 GB I was limited to about 3 or 4 Victoria 6 models in a lightly decorated environment. Now I have 24 GB to work with. It is freedom, at a price I paid off a while back.

    For doing AI work, the more memory you can get, the better. It's a bit hard to get 48 GB of VRAM without paying for premium professional cards, and from what I hear those are not great for gaming, which is another hobby of mine, so I would rather get the consumer-grade cards. Still, advancements are being made to run AI workloads on limited-VRAM systems, I think as low as 4 GB.

  • csteell_c2893e4ab6 Posts: 481
    edited April 3

    I was talking to my computer technician, and he suggested I ask the forum about NVIDIA Quadro cards. Does anyone use these for rendering in DAZ Studio? Are they a better choice than the NVIDIA RTX cards? Seeing as I might be spending over $3,000 AUD on a graphics card, I would like to get the best bang for my buck! Cheers.

    Post edited by csteell_c2893e4ab6 on
  • Gordig Posts: 10,058

    csteell_c2893e4ab6 said:

    I was talking to my computer technician, and he suggested I ask the forum about NVIDIA Quadro cards. Does anyone use these for rendering in DAZ Studio? Are they a better choice than the NVIDIA RTX cards? Seeing as I might be spending over $3,000 AUD on a graphics card, I would like to get the best bang for my buck! Cheers.

    The terminology gets tricky here, both because NVIDIA seems to be moving away from the term Quadro and because the last few generations of Quadro cards HAVE been RTX cards. The real distinction is between Quadro and GeForce cards, i.e. professional vs. consumer. Quadros use a lot less power than GeForce cards but don't perform quite as well: a 3090/4090 will render faster than a 24 GB Quadro card of the same generation, but the Quadro will use less power and be more stable and durable. RTX is just a technology, and all new cards, whether GeForce or Quadro, utilize it.

  • Kaze Posts: 51

    I have never used the pro cards for rendering, but historically they have been significantly more expensive than consumer-grade cards with similar specs. I could not justify the price since I only do gaming and rendering, though they do have some benefits, like the ability to get high VRAM on a single card.

    One of the things I suspect makes them expensive is that the VRAM is ECC RAM, which is meant for high-accuracy computing. You would not really benefit from that unless you were doing high-level simulations and computations requiring a high degree of accuracy, which I assume researchers and enterprise-level companies find more useful. Those markets can afford to spend more, which pushes supply and demand to the price point where the cards sit now.

    There are probably more benefits, particularly on the AI workload side. For use in Iray, I would say a pro-level card is probably not worth it. I must repeat, though: I have never used them, so I have only the specs to go by.

  • When funds are available I am going to go ahead with an RTX 4090! Can anyone tell me the difference between the various models, such as the Gigabyte GeForce RTX 4090 WINDFORCE V2 24G, the MSI GeForce RTX 4090 VENTUS 3X E 24G OC, the ASUS TUF Gaming GeForce RTX 4090 24GB OG OC Edition, or any others? Cheers.

  • Richard Haseltine Posts: 100,948

    Generally they are overclocked (running faster than the base specification) to various degrees, and may have additional cooling. Overclocking isn't necessarily the friend of stability, but the manufacturers do usually try to design them to work without trouble. Still, you are paying extra for the overclocking and may find you need to turn it down for extended renders.

  • Kaze Posts: 51

    You may want to be aware that we are about to enter the next generation of graphics cards pretty soon; the RTX 5090 is predicted to be released around the 4th quarter of this year. If you're going to spend the money on the top-of-the-line model today, you might consider waiting the 6 to 8 months until that release. Graphics cards have about 2 years before a newer model replaces them. If you can wait, you will be in a better position when the new card comes out, since the price of the previous generation is likely to fall and the new cards will likely take over the current top-end pricing.

    It is likely that supply will be affected by scalpers, though. Over the past few generations, scalpers made it difficult to get new cards; when I got my 3090, the only way I could get one was to buy a prebuilt computer with the card installed. I was long overdue for a refresh anyway, so it worked out. Supply didn't stabilize until much later, by which point crypto mining had hit a big crash and one of the bigger cryptocurrencies, Ethereum, was switching to proof of stake, which does not rely on pure number crunching with graphics cards.

    At the release of the 4000 series, I thought it was a bit of justice served when scalpers found themselves holding tons of cards they could not sell, because market demand was much lower at the time. People had the choice of buying brand-new cards with a full manufacturer warranty, or buying scalped cards that were more expensive and several months into their warranties. I feel like the market appetite cooled down after the 3000 series, and people had a lot of spite toward the scalpers; personally, I hate scalping of anything. The 3000 series was also when real-time ray tracing moved to its second generation, so I would guess more people were willing to get on board with the RTX technologies by then as well.

    Still, around the time the next generation launches, you will find people selling previous-generation cards at better prices on eBay so they can put money toward the new release.

    If you don't want to wait, that is fine. A 6 to 8 month span is quite long, and you would still be making a good-sized jump in performance for the money you are about to spend on the 4090.
