Are two Nvidia Titan X cards enough for GPU rendering today, in 2024?


Comments

  • milliethegreat said:

    Also I CANNOT AFFORD RTX!!!!  I have $173 discretionary a month now so a P40 is what I can afford. Am I safe with that? Or is that not supported soon too?

    Go with the P40 if you need the VRAM; if not, hold off for a month or so, save the money you'd spend, and get an RTX 3060 (12GB).

    $280 on Newegg or Amazon (new), $220 on eBay (used).

    P40s are ~$160 right now on eBay.

    That's a $60-120 difference in price for the 3060 vs the P40.

    Or, save up for about 8 months and get a 3090 (~$900 refurbished on Newegg or eBay).

    Regarding support being dropped for the Pascal series: probably not for a few more years, but take that with a grain of salt.

     

  • Silver Dolphin Posts: 1,608

    I would not render in Daz Iray if I don't have an expensive video card. Just do it in Blender and use the Bridge. Blender works better than Studio, and you don't have to spend money on a new video card.

  • alan bard newcomer Posts: 2,202

    I would love to have two of them... I have one.
    The tractor and trailer, the car, and the 3 cats, which are 6 Genesis figures (3 G8 bodies and 3 G1 heads), probably took 10-20 minutes at 3000x1500. I don't pay that much attention; I just do something in another window.
    Oh, and this is on Win7 in 4.21, because the computer doesn't want to do Win10 (a curse on the house of Asus), but the dual 2630v3 Xeons still rate at 200 in CPU ranking, while the Win10 i7-7000, several generations newer, is only at 900.
    But yes, I do a lot of building layers in PS... not only faster renders but more control.
    ---
    The second one used to be a couple hours at 4kx2k,
    but break it down into pieces and none of the pieces takes more than 5 minutes.
    ---
    The processing on the newer cards is faster, but I think you have to hit a 3080 to equal the 12GB RAM and get more CUDA cores; the lower-cost upgrades have less RAM and no increase in CUDA cores.
    I used to have my 980 Ti 6GB in with it, and Iray seemed to use the small card to its max even when the big card was running full...
     

    Attached: momtile mumble seat 1s.jpg (1596 x 838), 23 hb base3-2-40per2.jpg (2400 x 1200), 23 hb base3-alt.jpg (6000 x 3000)
  • PerttiA Posts: 10,024

    alan bard newcomer said:

    the processing on the newer cards is faster but I think you have to hit a 3080 to equal the 12g ram and get more cudas -- lower cost upgrades are less ram and no increase of cudas. 
    I used to have my 980ti 6g in with it and iray seemed to use the small card to it's max even when the big card was running full ... 

    The non-RTX cards use an additional 1GB of VRAM for emulation of the RTX functions.
    W10 uses about 800MB more VRAM than W7.

  • milliethegreat Posts: 288

    Silver Dolphin said:

    I would not render in Daz Iray if I don't have an expensive video card. Just do it in Blender and use the Bridge. Blender works better than Studio, and you don't have to spend money on a new video card.

    I can't do that because I use geometry shells, dForce fur, and strand-based hair. If those were exportable to other software, I'd have done that already and we wouldn't be here having this conversation.

  • PerttiA said:

    The non-RTX cards use an additional 1GB of VRAM for emulation of the RTX functions.
    W10 uses about 800MB more VRAM than W7.

    This isn't exactly accurate.

    The Maxwell series (Titan X, M40, M4000, etc.) doesn't use the additional VRAM; Pascal (1080, P40, etc.) does, though.

    In testing and production, my P40 is always using ~1GB more VRAM than my M40 or M4000.

    I've tested this with pre-RTX versions of DS, and the VRAM usage didn't change on the Maxwells between the two.

    As for W10 using more VRAM: if you're referring to the OS itself, you might want to check your setup and display settings, and you may need to update the OS or the drivers for your GPU(s). On a test-bench system with a fresh install and clean setup, I'm only seeing ~100MB used in Win10 (GPU-Z or Task Manager) with a single 1080p monitor. On my primary workstation (Server 2019), I'm only seeing ~300MB, but that's driving three 1080p monitors, plus a lot of shortcuts and folders on the desktop.

    Even with maxed-out interface settings, I barely got above 400MB on my M4000 with DS open.
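    If you want to pull the same numbers from the command line yourself, nvidia-smi can report per-GPU memory use in CSV form. A minimal sketch of parsing that output; the sample line is hardcoded (the values are illustrative, not from any specific card), since I'm not assuming a GPU in the machine running this:

    ```python
    import csv
    import io

    # Sample output of:
    #   nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv,noheader,nounits
    # (hardcoded illustrative values; on a real system, capture this via subprocess)
    sample = "Quadro M4000, 217, 8192\nTesla P40, 1240, 24576\n"

    def parse_gpu_memory(text):
        """Return a list of (name, used_mb, total_mb) tuples from nvidia-smi CSV output."""
        rows = []
        for name, used, total in csv.reader(io.StringIO(text), skipinitialspace=True):
            rows.append((name, int(used), int(total)))
        return rows

    for name, used, total in parse_gpu_memory(sample):
        print(f"{name}: {used}MB / {total}MB ({100 * used / total:.1f}%)")
    ```

    Running it against live output instead of the sample string is just a matter of swapping in the captured text.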


  • PerttiA Posts: 10,024

    DrunkMonkeyProductions said:

    PerttiA said:

    The non-RTX cards use an additional 1GB of VRAM for emulation of the RTX functions.
    W10 uses about 800MB more VRAM than W7.

    This isn't exactly accurate.

    The Maxwell series (Titan X, M40, M4000, etc.) doesn't use the additional VRAM; Pascal (1080, P40, etc.) does, though.

    In testing and production, my P40 is always using ~1GB more VRAM than my M40 or M4000.

    I've tested this with pre-RTX versions of DS, and the VRAM usage didn't change on the Maxwells between the two.

    As for W10 using more VRAM: if you're referring to the OS itself, you might want to check your setup and display settings, and you may need to update the OS or the drivers for your GPU(s). On a test-bench system with a fresh install and clean setup, I'm only seeing ~100MB used in Win10 (GPU-Z or Task Manager) with a single 1080p monitor. On my primary workstation (Server 2019), I'm only seeing ~300MB, but that's driving three 1080p monitors, plus a lot of shortcuts and folders on the desktop.

    Even with maxed-out interface settings, I barely got above 400MB on my M4000 with DS open.

    OK, and are you driving your monitors with the GPU whose VRAM usage you are looking at?

    I'm using W7, driving 3 monitors with my RTX 3060 12GB, and Windows takes 200MB of VRAM.
    W10 using more VRAM comes from reading countless posts here in the forums.
    The DS log also reports W10 users having less VRAM available (1GB less than installed on the card).

  • PerttiA said:

    OK, and are you driving your monitors with the GPU whose VRAM usage you are looking at?

    I'm using W7, driving 3 monitors with my RTX 3060 12GB, and Windows takes 200MB of VRAM.
    W10 using more VRAM comes from reading countless posts here in the forums.
    The DS log also reports W10 users having less VRAM available (1GB less than installed on the card).

    I thought I was pretty clear with the way I formatted my response, but yes, the M4000 is the video-output card in my workstation.

    The fresh install was using a Quadro K2000D, and checking my other systems (NVS 510, K2200, FirePro S9000, or WX4100), I'm getting similar results of ~100MB (or less) for a single monitor or none (I run some systems headless with remote management, TightVNC).

    I did have a bit of a brain fart; I'm so used to running Task Manager, GPU-Z, and a command prompt with nvidia-smi that my initially reported VRAM use was a bit high.

    Closing out at least two of the three dropped VRAM utilization down to ~200MB (217MB to be exact) on the M4000.

    Regarding the log file: at least on my M4000, I did see it reporting ~1GB (1.281GB; 6.719GiB of 8GiB) less VRAM available. However, if I deduct the system-used VRAM, the number drops to ~945MB.

    Checking the logs on my other systems, this seems to be a dynamic allocation depending on the GPU's available VRAM.

    My K2200: ~660MB (3.341 of 4GB) reported in the log; 454MB after deducting system usage.

    My K2000D reports ~344MB (1.656 of 2GB; this is under 4.12, as Kepler GPUs are no longer supported); 121MB when system usage is deducted.

    This is based on the initial startup of DS.

    My secondary GPUs, however, show no reduction except what the system is using.

    My P40 and M40s report only ~100MB less (23.9 of 24GB, and 11.864 of 11.925GB respectively).

    I'd take the log report with a massive grain of salt, as it's not reporting the total amount of VRAM all that accurately.
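    To make that bookkeeping explicit, here's the arithmetic as a sketch (the ~367MB system-use figure is my assumption for what the system was holding when I captured that log; it's the value that makes the M4000 numbers line up, not an official figure):

    ```python
    GIB_MB = 1024  # MiB per GiB

    def iray_reservation_mb(total_gib, available_gib, system_used_mb):
        """VRAM the DS log 'loses' beyond what the system itself is already using."""
        missing_mb = (total_gib - available_gib) * GIB_MB
        return missing_mb - system_used_mb

    # M4000: log reports 6.719GiB of 8GiB available; assume ~367MB of system use
    print(round(iray_reservation_mb(8, 6.719, 367)))
    ```

    Plugging in the K2200 or K2000D numbers the same way shows the reservation shrinking with the card's total VRAM, which is why it looks like a dynamic allocation rather than a flat 1GB.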


  • milliethegreat Posts: 288

    Soooo... M40s are no longer supported by Iray for sure? I don't want to spend $160+ on a P40 if I can get an M40 for about $80. That's an astronomical difference in price when it comes to my very fixed budget.

  • milliethegreat Posts: 288

    Also, is a P40 or M40 even worth it compared to CPU rendering anyway? I'm running a Ryzen 5 4600G, stock-fan cooled. I'm doing CPU rendering.

  • milliethegreat Posts: 288

    DrunkMonkeyProductions said:

    PerttiA said:

    OK, and are you driving your monitors with the GPU whose VRAM usage you are looking at?

    I'm using W7, driving 3 monitors with my RTX 3060 12GB, and Windows takes 200MB of VRAM.
    W10 using more VRAM comes from reading countless posts here in the forums.
    The DS log also reports W10 users having less VRAM available (1GB less than installed on the card).

    I thought I was pretty clear with the way I formatted my response, but yes, the M4000 is the video-output card in my workstation.

    The fresh install was using a Quadro K2000D, and checking my other systems (NVS 510, K2200, FirePro S9000, or WX4100), I'm getting similar results of ~100MB (or less) for a single monitor or none (I run some systems headless with remote management, TightVNC).

    I did have a bit of a brain fart; I'm so used to running Task Manager, GPU-Z, and a command prompt with nvidia-smi that my initially reported VRAM use was a bit high.

    Closing out at least two of the three dropped VRAM utilization down to ~200MB (217MB to be exact) on the M4000.

    Regarding the log file: at least on my M4000, I did see it reporting ~1GB (1.281GB; 6.719GiB of 8GiB) less VRAM available. However, if I deduct the system-used VRAM, the number drops to ~945MB.

    Checking the logs on my other systems, this seems to be a dynamic allocation depending on the GPU's available VRAM.

    My K2200: ~660MB (3.341 of 4GB) reported in the log; 454MB after deducting system usage.

    My K2000D reports ~344MB (1.656 of 2GB; this is under 4.12, as Kepler GPUs are no longer supported); 121MB when system usage is deducted.

    This is based on the initial startup of DS.

    My secondary GPUs, however, show no reduction except what the system is using.

    My P40 and M40s report only ~100MB less (23.9 of 24GB, and 11.864 of 11.925GB respectively).

    I'd take the log report with a massive grain of salt, as it's not reporting the total amount of VRAM all that accurately.


    Ummmmm... question: how are you getting a P40 to work with Iray? I just called Nvidia technical support, and the guy on the phone insisted on RTX and ONLY RTX. I'm now hesitant to pay $160 for the P40 if it doesn't work with Iray at all, or will lose support before year's end. Your thoughts? You're probably right, though. The guy on the phone might just be trying to steer me towards RTX to make money, or because his boss/company requires him to.

  • milliethegreat said:

    Soooo... M40s are no longer supported by Iray for sure? I don't want to spend $160+ on a P40 if I can get an M40 for about $80. That's an astronomical difference in price when it comes to my very fixed budget.

    M40s are still supported, they're just deprecated, which means any update to Iray/DS could render them useless. Just check the update thread when a new version of DS rolls out to see whether they've been dropped or not.

    I'm currently running a set of 4 on my render box with DS 4.22.0.16 (current version as of this posting), on driver 551.61, on Server 2019.

    I'd still suggest investing in the P40s instead, even with the additional overhead DS/Iray is going to require for emulation of RTX: about 1GB more than rendering on the M40s.

    The P40 is also going to be faster than the M40: newer architecture (Pascal vs Maxwell), more cores (3840 vs 3072 respectively), higher clocks (1303MHz vs 948MHz respectively), and the power draw is the same for each at around 250W max.

    You will get better performance with two M40s, but at double the power draw; you're still not going to get double the performance (benchmarking on my systems: P40: 8m28.18s, M40: 11m36.7s, 2x M40: 5m42.21s), and you'd have half the VRAM available. VRAM does not stack unless you're using NVLink/NVSwitch-enabled cards, which these aren't.

    milliethegreat said:

    Also, is a P40 or M40 even worth it compared to CPU rendering anyway? I'm running a Ryzen 5 4600G, stock-fan cooled. I'm doing CPU rendering.

    Very much so.

    To put this in perspective, my 20-core/40-thread server (2x E5-2680v2) benchmarks at 42m10.7s.
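    Putting those benchmark times side by side, converted to seconds (these are just the figures quoted above; nothing new is being measured here):

    ```python
    def to_seconds(m, s):
        """Convert a minutes/seconds benchmark time to seconds."""
        return m * 60 + s

    # Benchmark times from the post above
    times = {
        "P40": to_seconds(8, 28.18),
        "M40": to_seconds(11, 36.7),
        "2x M40": to_seconds(5, 42.21),
        "2x E5-2680v2 (CPU)": to_seconds(42, 10.7),
    }

    baseline = times["2x E5-2680v2 (CPU)"]
    for name, t in times.items():
        print(f"{name:20s} {t:8.1f}s  {baseline / t:4.1f}x vs CPU")
    ```

    Even the single M40 comes out several times faster than the 20-core CPU box, which is the point about GPU vs CPU rendering in a nutshell.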


  • milliethegreat Posts: 288

    If I get an M40 over a P40 (because I'd rather pay $80 as opposed to $160 if I can), how many months or years would I get out of it before it's useless for Iray/DS?

  • milliethegreat Posts: 288

    DrunkMonkeyProductions said:

    milliethegreat said:

    Soooo... M40s are no longer supported by Iray for sure? I don't want to spend $160+ on a P40 if I can get an M40 for about $80. That's an astronomical difference in price when it comes to my very fixed budget.

    M40s are still supported, they're just deprecated, which means any update to Iray/DS could render them useless. Just check the update thread when a new version of DS rolls out to see whether they've been dropped or not.

    I'm currently running a set of 4 on my render box with DS 4.22.0.16 (current version as of this posting), on driver 551.61, on Server 2019.

    I'd still suggest investing in the P40s instead, even with the additional overhead DS/Iray is going to require for emulation of RTX: about 1GB more than rendering on the M40s.

    The P40 is also going to be faster than the M40: newer architecture (Pascal vs Maxwell), more cores (3840 vs 3072 respectively), higher clocks (1303MHz vs 948MHz respectively), and the power draw is the same for each at around 250W max.

    You will get better performance with two M40s, but at double the power draw; you're still not going to get double the performance (benchmarking on my systems: P40: 8m28.18s, M40: 11m36.7s, 2x M40: 5m42.21s), and you'd have half the VRAM available. VRAM does not stack unless you're using NVLink/NVSwitch-enabled cards, which these aren't.

    milliethegreat said:

    Also, is a P40 or M40 even worth it compared to CPU rendering anyway? I'm running a Ryzen 5 4600G, stock-fan cooled. I'm doing CPU rendering.

    Very much so.

    To put this in perspective, my 20-core/40-thread server (2x E5-2680v2) benchmarks at 42m10.7s.


    You DO realize there's a 24GB version of the M40, right?

  • milliethegreat Posts: 288

    milliethegreat said:

    DrunkMonkeyProductions said:

    milliethegreat said:

    Soooo... M40s are no longer supported by Iray for sure? I don't want to spend $160+ on a P40 if I can get an M40 for about $80. That's an astronomical difference in price when it comes to my very fixed budget.

    M40s are still supported, they're just deprecated, which means any update to Iray/DS could render them useless. Just check the update thread when a new version of DS rolls out to see whether they've been dropped or not.

    I'm currently running a set of 4 on my render box with DS 4.22.0.16 (current version as of this posting), on driver 551.61, on Server 2019.

    I'd still suggest investing in the P40s instead, even with the additional overhead DS/Iray is going to require for emulation of RTX: about 1GB more than rendering on the M40s.

    The P40 is also going to be faster than the M40: newer architecture (Pascal vs Maxwell), more cores (3840 vs 3072 respectively), higher clocks (1303MHz vs 948MHz respectively), and the power draw is the same for each at around 250W max.

    You will get better performance with two M40s, but at double the power draw; you're still not going to get double the performance (benchmarking on my systems: P40: 8m28.18s, M40: 11m36.7s, 2x M40: 5m42.21s), and you'd have half the VRAM available. VRAM does not stack unless you're using NVLink/NVSwitch-enabled cards, which these aren't.

    milliethegreat said:

    Also, is a P40 or M40 even worth it compared to CPU rendering anyway? I'm running a Ryzen 5 4600G, stock-fan cooled. I'm doing CPU rendering.

    Very much so.

    To put this in perspective, my 20-core/40-thread server (2x E5-2680v2) benchmarks at 42m10.7s.


    You DO realize there's a 24GB version of the M40, right?

    They're about the same size and specs and stuff; just one has double the VRAM.

  • milliethegreat Posts: 288

    PS: You have no idea how difficult it is to find a GPU (or anything I buy used) on eBay that ships with something other than USPS or DHL. God, do I DESPISE those two shipping carriers. They're the absolute WORST! At least around here they are. Sorry about the tangent. Just a thought, folks.

  • milliethegreat said:

    You DO realize there's a 24GB version of the M40, right?

    Actually, I'd forgotten that one even existed.

    Oops.

  • milliethegreat said:

    If I get an M40 over a P40 (because I'd rather pay $80 as opposed to $160 if I can), how many months or years would I get out of it before it's useless for Iray/DS?

    That I can't give you a definitive answer on. As of the latest documentation on Iray I could find (February this year), Maxwells are still supported.

    The best I can do is guesstimate based on previous generations. Going by the previous timeline, Maxwell could be dropped as soon as sometime later this year.

    However, certain operating systems have already lost support.

    Windows 7, 8/8.1, and Server 2016 (depending on build number) have already lost support, as there aren't new enough drivers for them.

    I had to upgrade to Server 2019 on my systems to maintain support for the latest version of DS (4.22.0.16).


  • milliethegreat said:

    PS: You have no idea how difficult it is to find a GPU (or anything I buy used) on eBay that ships with something other than USPS or DHL. God, do I DESPISE those two shipping carriers. They're the absolute WORST! At least around here they are. Sorry about the tangent. Just a thought, folks.

    Sorry for your pain; I hate FedEx, they keep not delivering and then saying they attempted delivery, or delivering to my neighbor's house.

    I found several listings rather quickly that ship FedEx or UPS, a bit more expensive ($95 for the 24GB with FedEx).

    Here's a link

    https://www.ebay.com/itm/155113808954?epid=851104737&itmmeta=01HZ5KHCAE98N01GT0C6GNKQEJ&hash=item241d80e43a:g:n4oAAOSwpl5i9DOP&itmprp=enc%3AAQAJAAAA4PklIexR%2FI32hYLO7g8WoHoBrDtrR%2BEuixka0ya5eKp8DL9us%2BwjzlxyyJkGtID%2F2jaxvdTi7iTWVBt%2BKx6CB4LaQpYVpnHST7renI350wA07us0OHTqqFd8VNWjK5cqi28GwY%2BGK5wf6Ovc5qIYsKCcHY0MHi6q5PEBDWYQ5tv1748bmkJ%2FC9VF1cAQOFTG5BCLuSietBSBE68UChd%2FP5Vj1F9xTW2K%2BrzAoFIyl7KiCQq1RjWkr%2Bvf%2BIDzUUehb3x%2FOdAma6crE9MSTj2mYIh5hdO5VJHDlOib0QYEbI7l%7Ctkp%3ABk9SR6zFxbP5Yw

  • milliethegreat Posts: 288

    DrunkMonkeyProductions said:

    milliethegreat said:

    If I get an M40 over a P40 (because I'd rather pay $80 as opposed to $160 if I can), how many months or years would I get out of it before it's useless for Iray/DS?

    That I can't give you a definitive answer on. As of the latest documentation on Iray I could find (February this year), Maxwells are still supported.

    The best I can do is guesstimate based on previous generations. Going by the previous timeline, Maxwell could be dropped as soon as sometime later this year.

    However, certain operating systems have already lost support.

    Windows 7, 8/8.1, and Server 2016 (depending on build number) have already lost support, as there aren't new enough drivers for them.

    I had to upgrade to Server 2019 on my systems to maintain support for the latest version of DS (4.22.0.16).


    My Quadro is still supported (surprisingly) by my OS (Windows 11 Pro), at least as of this year, and it's an older chipset than the M40 (my Quadro is Kepler; it's a Quadro 410). So wouldn't the M40 still be supported by my OS if running alongside the 410? That's been my experience in the past, hence I still have the 410. And the M40 is supported by DS for how much longer, if you have any idea?

  • milliethegreat Posts: 288

    How much better or worse is CPU rendering on an i9-14900K than an M40 or P40 or Ryzen 5 4600G? I have a 14900K lying around and am considering using that instead. I know it's not going to be as fast as GPU rendering, and obviously not as fast as RTX, but I'm wondering how it compares to what I have now (Ryzen 5 4600G), or to an M40 or P40.

  • milliethegreat said:

    My Quadro is still supported (surprisingly) by my OS (Windows 11 Pro), at least as of this year, and it's an older chipset than the M40 (my Quadro is Kepler; it's a Quadro 410). So wouldn't the M40 still be supported by my OS if running alongside the 410? That's been my experience in the past, hence I still have the 410. And the M40 is supported by DS for how much longer, if you have any idea?

    Oh, this just got complicated. Unless you don't mind being stuck on DS 4.21.0.5 or earlier, you're going to need a new GPU for video output.

    It's not that the OS isn't supported; it's that Iray doesn't support the card. Sorry if I wasn't clear on that.

    The latest driver for the 410 is 474.82; the minimum for Iray is 512.78.

    The 514.08 driver (the closest to the Iray requirement I could find for the M40) doesn't even list the 410 in the listdevices.txt file, so it's not supported by that driver.

    About the cheapest supported option I could find was the K620, at ~$17 on eBay.

    It's a K prefix, but it's Maxwell.

    I had to retire several GPUs because of this.

    RIP my NVS 510s, my K2000D, and my P106-100.


  • milliethegreat Posts: 288

    So a K620 and an M40 and I'm good for Iray? And for how long?

  • milliethegreat said:

    So a K620 and an M40 and I'm good for Iray? And for how long?

    The K620 will be viable till Nvidia drops Maxwell support for your operating system; no clue when that'll happen.

    The M40 will be viable till Iray drops Maxwell support; I have no idea when that will be.

    If you don't want to have that to worry about, the K620 and a P40 would be good for possibly a few more years.

     

  • outrider42 Posts: 3,679

    You can make guesses based on the past history of when Nvidia ended driver support. Kepler was ended a little while back, and the next line is Maxwell. It is only a matter of time. To be fair, Maxwell is a full 10 years old now. Supporting GPU hardware for over a decade is quite a feat. Kepler launched in 2012, and the final update was in 2021. It got one last security update in 2022. The previous GPU line, called Fermi, ended support in 2018. Fermi was first released in 2010, so it only had 8 years of updates (not counting a couple security patches). So just going by their history, I think it should be clear that Maxwell is nearly at the end of its line. Maybe it will work for a few months, or maybe the very next driver will be the one that ends support. Nobody knows. Only someone inside Nvidia might know.

    Studio Drivers do not exist for Maxwell. You would be stuck only using Game Ready drivers. You can certainly use Game Ready drivers fine, that isn't the issue, rather this shows that Nvidia isn't looking to support Maxwell for much longer.

    At any rate, buying a Maxwell based GPU right now is extremely risky. You would be taking a serious gamble, and even in the best case, you would be looking at a limited time frame. I don't think it is worth it.

    Pascal based cards should still have a couple years left, or maybe even more. The GTX 1060 is still ranked in the top FIVE on the Steam Hardware survey. This survey is actually pretty important. Game developers try to target their performance to the more popular GPUs on this survey, as that can help lead to more sales because more people can play the game. Nvidia certainly looks at the Steam survey as well; CEO Jensen Huang has mentioned it numerous times. The point being, with Pascal still being so popular, Nvidia will be more likely to keep it going a solid 10 years as well. That is no guarantee, of course.

    BTW, the GTX 970 and 960 are still ranked in the top 50 on the Steam survey. But they are in the lower half, and dropping more and more. Though funny enough this month shows a tiny 0.01% increase in usage, but that is likely margin of error as the survey is random so the numbers do shift sometimes. I believe the once very popular 670 was still on the survey ranking in the bottom half when Kepler was killed off, too.

    You can still use the hardware if support ends, but you would have to stop updating Daz Studio.

    I know it is tough on a fixed income, but it would be far more advantageous to bite the bullet and buy a 3060 instead. That may not be what you want to hear, but I just cannot recommend Maxwell in 2024. The 3060 will be supported for years to come, but more than that, it will use a lot less power while being much faster. The Titan X Maxwell uses 250 Watts, while the 3060 only uses 170 Watts. Depending on how much you pay for electricity, that can lead to some savings. It can save more than you think, because it is so much faster, too, because it can perform the same work in far less time. We do not have updated benchmarks for the Titan X, but it could only do 2.33 iterations per second in this test. The 3060 hit 6.6 iterations per second on a newer version of Iray. That is pretty huge.

    So the 3060 is about 3 times faster than the Titan X, while also using less power. This fact means that instead of rendering for an hour on the Titan X, it might only take 20 minutes on the 3060. That actually ends up saving quite a bit of power because the work is done so much faster. Plus, the way RTX works, the performance gap actually gets wider as the geometry gets more complex, meaning there can be scenes where the 3060 is a lot more than 3 times faster. dForce strand hair will run much faster on RTX; we had a thread exploring that as well, and the RTX cards were many times faster than GTX cards. Plus, if you buy a new 3060, it will have a warranty on it. However, you can buy a used one, too, if you can find one for a deal.
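    The energy math here is easy to make concrete. A sketch using the wattages and iterations-per-second quoted above, assuming each card draws its full rated power for the whole render (a simplification; real draw varies) and an arbitrary render budget of 5000 iterations picked just for comparison:

    ```python
    def render_energy_wh(total_iterations, iters_per_sec, watts):
        """Energy (watt-hours) to finish a fixed render, assuming constant power draw."""
        hours = total_iterations / iters_per_sec / 3600
        return watts * hours

    ITERS = 5000  # arbitrary render budget, same for both cards

    titan = render_energy_wh(ITERS, 2.33, 250)    # Titan X Maxwell
    rtx3060 = render_energy_wh(ITERS, 6.6, 170)   # RTX 3060
    print(f"Titan X (Maxwell): {titan:.0f}Wh, RTX 3060: {rtx3060:.0f}Wh "
          f"({titan / rtx3060:.1f}x less energy on the 3060)")
    ```

    Because the 3060 is both faster and lower-wattage, the energy gap ends up bigger than the wattage gap alone would suggest.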

    There is a 2060 12GB. That would be another option if it is cheaper. This variant is a bit more rare, though, and as such is still strangely high in price in places. Based on Turing, it should get at least 4-5 years of support. It is not nearly as fast, though, hitting around 4 iterations per second in the benchmark. Still faster than the Titan X, but the 3060 would be better value.

    Why do you have a 14900k lying around? The Intel 14900k will probably not be as fast as the Titan X Maxwell. But we don't have a benchmark for one, as far as I know. It might actually be around as fast. You can try asking if anybody has one and can test. The benchmark thread is in my sig. If you have a 14900k, you have a bargaining chip. The 14900k can be sold for $500.

  • milliethegreat Posts: 288

    outrider42 said:

    You can make guesses based on the past history of when Nvidia ended driver support. Kepler was ended a little while back, and the next line is Maxwell. It is only a matter of time. To be fair, Maxwell is a full 10 years old now. Supporting GPU hardware for over a decade is quite a feat. Kepler launched in 2012, and the final update was in 2021. It got one last security update in 2022. The previous GPU line, called Fermi, ended support in 2018. Fermi was first released in 2010, so it only had 8 years of updates (not counting a couple security patches). So just going by their history, I think it should be clear that Maxwell is nearly at the end of its line. Maybe it will work for a few months, or maybe the very next driver will be the one that ends support. Nobody knows. Only someone inside Nvidia might know.

    Studio Drivers do not exist for Maxwell. You would be stuck only using Game Ready drivers. You can certainly use Game Ready drivers fine, that isn't the issue, rather this shows that Nvidia isn't looking to support Maxwell for much longer.

    At any rate, buying a Maxwell based GPU right now is extremely risky. You would be taking a serious gamble, and even in the best case, you would be looking at a limited time frame. I don't think it is worth it.

    Pascal based cards should still have a couple years left, or maybe even more. The GTX 1060 is still ranked in the top FIVE on the Steam Hardware survey. This survey is actually pretty important. Game developers try to target their performance to the more popular GPUs on this survey, as that can help lead to more sales because more people can play the game. Nvidia certainly looks at the Steam survey as well; CEO Jensen Huang has mentioned it numerous times. The point being, with Pascal still being so popular, Nvidia will be more likely to keep it going a solid 10 years as well. That is no guarantee, of course.

    BTW, the GTX 970 and 960 are still ranked in the top 50 on the Steam survey. But they are in the lower half, and dropping more and more. Though funny enough this month shows a tiny 0.01% increase in usage, but that is likely margin of error as the survey is random so the numbers do shift sometimes. I believe the once very popular 670 was still on the survey ranking in the bottom half when Kepler was killed off, too.

    You can still use the hardware if support ends, but you would have to stop updating Daz Studio.

    I know it is tough on a fixed income, but it would be far more advantageous to bite the bullet and buy a 3060 instead. That may not be what you want to hear, but I just cannot recommend Maxwell in 2024. The 3060 will be supported for years to come, but more than that, it will use a lot less power while being much faster. The Titan X Maxwell uses 250 Watts, while the 3060 only uses 170 Watts. Depending on how much you pay for electricity, that can lead to some savings. It can save more than you think, because it is so much faster, too, because it can perform the same work in far less time. We do not have updated benchmarks for the Titan X, but it could only do 2.33 iterations per second in this test. The 3060 hit 6.6 iterations per second on a newer version of Iray. That is pretty huge.

    So the 3060 is about 3 times faster than the Titan X, while also using less power. This fact means that instead of rendering for an hour on the Titan X, it might only take 20 minutes on the 3060. That actually ends up saving quite a bit of power because the work is done so much faster. Plus, the way RTX works, the performance gap actually gets wider as the geometry gets more complex, meaning there can be scenes where the 3060 is a lot more than 3 times faster. dForce strand hair will run much faster on RTX; we had a thread exploring that as well, and the RTX cards were many times faster than GTX cards. Plus, if you buy a new 3060, it will have a warranty on it. However, you can buy a used one, too, if you can find one for a deal.

    There is a 2060 12GB. That would be another option if it is cheaper. This variant is a bit more rare, though, and as such is still strangely high in price in places. Based on Turing, it should get at least 4-5 years of support. It is not nearly as fast, though, hitting around 4 iterations per second in the benchmark. Still faster than the Titan X, but the 3060 would be better value.

    Why do you have a 14900k lying around? The Intel 14900k will probably not be as fast as the Titan X Maxwell. But we don't have a benchmark for one, as far as I know. It might actually be around as fast. You can try asking if anybody has one and can test. The benchmark thread is in my sig. If you have a 14900k, you have a bargaining chip. The 14900k can be sold for $500.

    You do realize that a 3060 is like almost $300, if not $400? I don't even get that discretionary amount in three months, forget today! That's FAR out of my price range. And that's for the 8GB! The 12GB is much more expensive, and still not enough for scenes with VDB emissives and large textures and imported hi-poly assets, or UltraScenery 2! That's why I'm dead set on an M40 or P40! They're 24GB and I can afford them!

  • milliethegreat Posts: 288

    milliethegreat said:

    outrider42 said:

    You can make guesses based on Nvidia's past history of ending driver support. Kepler support ended a little while back, and the next line up is Maxwell. It is only a matter of time. To be fair, Maxwell is a full 10 years old now, and supporting GPU hardware for over a decade is quite a feat. Kepler launched in 2012, and its final update was in 2021, with one last security update in 2022. The previous GPU line, Fermi, lost support in 2018. Fermi was first released in 2010, so it only had 8 years of updates (not counting a couple of security patches). Going by that history, it should be clear that Maxwell is nearly at the end of the line. Maybe it will keep working for a few months, or maybe the very next driver will be the one that ends support. Nobody knows; only someone inside Nvidia might.

    Studio Drivers do not exist for Maxwell. You would be stuck only using Game Ready drivers. You can certainly use Game Ready drivers fine, that isn't the issue, rather this shows that Nvidia isn't looking to support Maxwell for much longer.

    At any rate, buying a Maxwell based GPU right now is extremely risky. You would be taking a serious gamble, and even in the best case, you would be looking at a limited time frame. I don't think it is worth it.

    Pascal based cards should still have a couple years left, or maybe even more. The GTX 1060 is still ranked in the top FIVE on the Steam Hardware Survey. This survey is actually pretty important: game developers try to target their performance to the more popular GPUs on it, as that can help lead to more sales because more people can play the game. Nvidia certainly looks at the Steam survey as well; CEO Jensen Huang has mentioned it numerous times. The point being, with Pascal still being so popular, Nvidia will be more likely to keep it going for a solid 10 years as well. That is no guarantee, of course. 

    BTW, the GTX 970 and 960 are still ranked in the top 50 on the Steam survey, but they are in the lower half and dropping steadily. Funnily enough, this month shows a tiny 0.01% increase in usage, but that is likely margin of error, as the survey is random and the numbers do shift sometimes. I believe the once very popular 670 was still ranking in the bottom half of the survey when Kepler was killed off, too.

    You can still use the hardware if support ends, but you would have to stop updating Daz Studio.

    I live with my parents, who provide for everything, and I live off SSI for things for myself, like my computer hardware, 3D assets, and stuff, so luckily I don't have to pay utilities, internet, or anything like that. I'm thankful for that. So I'm kinda stuck with CPU rendering forever, I guess. Oh well, at least I thought I'd ask.

  • milliethegreat Posts: 288

    It'd be nice if DTU supported geometry shells and strand-based hair and fur. Those are the only reasons I'm dead set on using Daz for my renderings. Otherwise I'd be doing ALL my renderings in UE, which I can do on cheaper GPUs than a 3060, and it doesn't require a ton of VRAM for my needs; 12 to 16 is usually enough, and UE5 supports GPUs as early as GCN 2.0 (AMD), Maxwell (Nvidia), and also Intel Arc. I have one of each, in three desktops, one per. And a Titan Xp is in my price range.

  • PerttiA Posts: 10,024

    milliethegreat said:

    You do realize that a 3060 is like almost $300 if not $400.

    No, it isn't.

    At this very moment, just by going to an online store, I can buy an Asus Dual Fan 3060 12GB here in Finland for 319 EUR including 24% VAT, and a few weeks back it was on offer for 249 EUR (including 24% VAT), so I would assume one can buy one in the States for something like $200-$250 brand new.

  • milliethegreat Posts: 288

    They're over $260 here. I can't afford that without draining my entire month's discretionary budget, and that's not including tax. And I seriously doubt 12GB is enough. Sooooo, is the best option at this point a used P40 and K620? That's affordable for me.
