Comments
She still has a halo - I'd be interested to know how that can be avoided. I don't have Final Cut, so I use DaVinci Resolve (free version), but I have yet to try any green screen work. I like what you are attempting here.
Yeah, looks much better; the walk/look-around animation is very smooth and believable.
Tried a terrain render, came out looking like ass lol.
I may need to feather the edges of the mask a bit more in FCPX when I remove the green screen and make it transparent. Here is a new version with the image plane shifted camera-left and back a bit. I also corrected the path of the drone that comes in, as it intersected with one of the poles. The actress's right elbow now goes behind a pole and she is casting a proper shadow. The source green screen is not very clean, and quite often she goes in and out of focus.
Yeah, much improved.
@TheKD
There seems to be a lot more light than needed?
I think I royally screwed up the texture export and material creation part lol.
This still needs work but it's the first that I'm even moderately happy with.
Has anyone else noticed that textures on clothing look WAY better in Blender? I think it's how it handles the normal maps, maybe?
I have a full AMD build, so I wanted to give Blender, and more specifically Radeon ProRender, a try. The scene was imported into Blender 2.9 with the nightly Diffeomorphic addon from yesterday. The Cycles renders are straight out of the import, without any modifications. They look alright except for the eyes.
Iray renders for reference
Cycles (no changes to materials after importing; adaptive sampling on, max samples and noise threshold set to match Iray).
Prorender, no changes. Oh wow, things are messed up!
Prorender, after switching skin, eyes, hair and cloth to its uber shader. Settings are not completely identical to Iray; I mainly just wanted to get it working first before tweaking the shaders.
The Iray Uber shader maps fairly straightforwardly to the ProRender uber shader. It would be awesome if Diffeomorphic could someday have the option to create Blender materials aimed at ProRender.
@asdow123 As for Diffeo, you can use the principled option for Iray materials. This maps to the Principled shader, which ProRender should read straightforwardly. So you can use Eevee, Cycles and ProRender without changing the materials.
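If it helps to picture what that option does, here is roughly how the common Iray Uber channels line up with Principled BSDF inputs. This is my own rough table from memory, an approximation rather than Diffeo's actual conversion code:

```python
# Rough, from-memory illustration of how common Iray Uber channels
# line up with Principled BSDF sockets -- an approximation, not
# Diffeomorphic's actual conversion table.
IRAY_TO_PRINCIPLED = {
    "Base Color":       "Base Color",
    "Glossy Roughness": "Roughness",
    "Metallic Weight":  "Metallic",
    "Refraction Index": "IOR",
    "Cutout Opacity":   "Alpha",
    "Emission Color":   "Emission",
    "Normal Map":       "Normal",  # goes through a Normal Map node
}
```

Since Cycles, Eevee and ProRender can all evaluate the Principled BSDF, one conversion covers all three engines.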
Hey, thank you, I did check that out as well, but it's not quite what I was after with that comment. Even though the principled BSDF works out of the box for Prorender, the way diffeo imports daz assets results in shaders that require further tweaking to look good, especially with Prorender. What I'd like is for the addon to have a decent material setup aimed at Prorender, similar to Cycles with the BSDF import selection, so that manual material tweaking is minimized.
I actually took a look at the addon's node setup code and added Prorender uber shader support to it. After one evening of work, it works alright for a base G8 character. The eyes still have to be tweaked after importing to look good and there are a ton of corner cases that are not taken into account yet. If it's not too hacky and Thomas deems it good enough, maybe it can be added to the master one day.
Prorender using the RPR uber shader. Eye materials needed a couple of quick tweaks after importing, but otherwise all the materials are untouched.
And again, Iray for reference
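If anyone wants to poke at the same idea before it lands anywhere official, the heart of it is just instantiating the RPR node instead of the Cycles one in the material builder. A minimal sketch, assuming the ProRender addon registers its uber node under the bl_idname "RPRShaderNodeUber" (check the actual name in your install; this is from one evening of hacking, not the addon's real code):

```python
import bpy

def make_rpr_uber_material(name="RPR_Test"):
    """Minimal sketch: build a material around the RPR uber shader.
    Assumes the ProRender addon registers its node as "RPRShaderNodeUber";
    verify the bl_idname against your installed addon version."""
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    nodes.clear()

    uber = nodes.new("RPRShaderNodeUber")        # assumed bl_idname
    out = nodes.new("ShaderNodeOutputMaterial")  # RPR reads the standard output
    mat.node_tree.links.new(uber.outputs[0], out.inputs["Surface"])
    return mat
```

The real work is mapping each Iray channel onto the uber node's inputs, which is where all the corner cases live.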
This sounds great. Please open an issue at Diffeomorphic and attach your code so that Thomas can evaluate it. Maybe you can also explain the rules you followed for the conversion, so that interested people can contribute to improving it.
https://bitbucket.org/Diffeomorphic/import_daz/issues
Looking great! If you brought her down by a little bit, would she be on the ground, or does it still look like she's floating? But once her feet aren't in view, it all blends together fantastically! I'm digging it and can't wait to see more of what you do with it! I'm also trying to learn compositing with the green screen in Blender, and my biggest fear/concern is the look of floating, so I am very excited and curious to hear how you get it to work.
None of the shaders in Studio treat cloth as cloth.
A good way of seeing how cloth behaves is to look at velvet in real life; it's not just about shadows and the shading within those shadows, but the angle the eye or camera is looking at, coupled with the light. That's because cloth isn't a solid material, but an apparently solid material made of fibres spun into threads, which are then woven to create the cloth.
This is an excellent tutorial for creating cloth. It's possible to get the same basic effect with the Principled material, which the tutorial demonstrates, but the dedicated shader it builds allows for more options.
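If you just want to try the basic effect without following the whole tutorial, the Principled BSDF's Sheen input is the quick version; velvet is mostly that view-angle-dependent edge brightening. A minimal sketch (my own example values, pick whatever colour suits):

```python
import bpy

# Quick velvet-ish cloth using the Principled BSDF's Sheen input.
# A sketch of the basic effect only; the tutorial's dedicated shader
# gives far more control over the view-angle response.
mat = bpy.data.materials.new("VelvetishCloth")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.10, 0.02, 0.25, 1.0)
bsdf.inputs["Sheen"].default_value = 1.0       # fibre-like edge brightening
bsdf.inputs["Sheen Tint"].default_value = 0.8  # push sheen toward base colour
bsdf.inputs["Roughness"].default_value = 0.9   # cloth is not glossy
```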
Awesome! lol
It is just a WIP. Trying the new version of the DAZ plugin.
https://bryonlape.com/wp-content/uploads/2020/09/Vinette-01.png
Switching gears a bit, I tried the Space Living Room. Really nice default conversion with Diffeo; Cycles render with denoising on.
Both those look good.
Most of my render tests have been softball tosses, so I decided to toss it a hardball this time lol. Full environment, totally enclosed, only lit by a few emissive surfaces. It's kinda poly heavy at just shy of 4 million polygons.

My first try, using OptiX, hard crashed Blender lol. Second try, OptiX again but with Simplify and textures set to 2K: no hard crash, but it gave me a CUDA memory error. Is Cycles like Iray, where once it fails you should close and reopen, I wonder? I closed it, reopened, and tried OptiX again with the 2K map limit on Simplify, and it started rendering using a lot less memory. The first failed try was over 8000 MB; this time it didn't even hit 3000 MB.

I used the E-Cycles real world preset (the slowest one), rendering at 5760 x 3240 with 512 samples. First try: a bit of a lighting fail on my part lol. Can't even see the elf other than a silhouette and a few highlights. I figured it would be dark, but not quite that dark lol. The render took about 25 minutes to complete.
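For anyone wanting to repeat the Simplify trick, it's two settings (this is what took my memory use from 8000 MB+ down to under 3000 MB; the values are just what I used):

```python
import bpy

scene = bpy.context.scene
# Simplify with a render-time texture cap: Cycles downsizes every
# texture to 2K at render time instead of loading them at full size.
scene.render.use_simplify = True
scene.cycles.texture_limit_render = "2048"
```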
Second render, same settings, just moved my elf to be in the light, which made me realize why she is so dark haha. Must be that normal map UV error, not a lighting problem, doh. Looks like a drow elf lol. Yep, a lot of material zones to fix, but it was the UV-for-normal-map error. Oddly enough, all the environment maps had the same error, but as far as I can tell it only made the water go black. That render took 29 minutes to complete. Here we go, round three, fight!
Well, first observation as the render is going: correcting the normal maps seems to have turned some lamps on lol. The brick walls look a bit rougher and less flat, and the water looks more like water than the black abyss from before. Will have to delete the wall lamps and give it another render to see what the difference is. This render took 39 mins. Weird, more light; I figured it would take less time.
For this render, I deleted all the wall lamps and put smooth shading on all the walls and the bridge. Hoping to get slightly better-looking edges; those edge bevels look kinda bad lol.
Well, not horrible. The edge bevels didn't clear up any, and the hair, the skin on her face, and the eyes look a bit funky. It kinda pooped out denoising those last few tiles on the right; not sure why, but I stopped it at 42 minutes after about 15 minutes of it not doing anything.
Decided to do one more render at the fast preset to see what that looked like, still 512 samples. Seems to show some haircap poke-through there; otherwise, not a horrible render for 11 minutes lol.
I hybrid render, and on the occasional time it kicks up a fuss I do close down and reopen.
Especially at the resolution you have rendered at.
The Intel denoiser (no idea about Nvidia's in Blender) is far superior to what I could use in Iray, so I'd try 128 samples and the denoiser.
My compositor setup; I saved the startup file so it is part of the default file. If you don't want the defocus part, just take the Image output from the Denoise node and plug it into the Composite node (and the Viewer node if required).
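For anyone who'd rather script it than rebuild the graph by hand, the simple version (without my defocus branch) is just Render Layers into Denoise into Composite, roughly like this:

```python
import bpy

# Minimal scripted version of the Denoise -> Composite hookup,
# without the defocus branch.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

layers = tree.nodes.new("CompositorNodeRLayers")
denoise = tree.nodes.new("CompositorNodeDenoise")
comp = tree.nodes.new("CompositorNodeComposite")
viewer = tree.nodes.new("CompositorNodeViewer")

tree.links.new(layers.outputs["Image"], denoise.inputs["Image"])
tree.links.new(denoise.outputs["Image"], comp.inputs["Image"])
tree.links.new(denoise.outputs["Image"], viewer.inputs["Image"])
```

Wiring the denoising Normal and Albedo passes into the Denoise node improves the result, if you have them enabled on the view layer.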
If you use Nvidia's new denoising, what's it like with skin closeup?
@TheKD As for dark scenes, they always converge slowly in any PBR engine. Cycles has the advantage of a very good and fast denoiser. Also, you can adjust the exposure in the color management. Again, Cycles has the advantage that you don't need to re-render to change the exposure, or any other color-management parameter.
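The exposure adjustment lives in color management, so it's applied at display time and costs nothing. A quick sketch of where the settings sit (the values are just examples):

```python
import bpy

scene = bpy.context.scene
# Display-time exposure: no re-render needed to change it.
scene.view_settings.exposure = 1.5   # +1.5 stops; adjust to taste
scene.view_settings.gamma = 1.0
# Filmic (the default view transform since 2.80) also holds up
# better in dark, high-contrast scenes than Standard.
scene.view_settings.view_transform = "Filmic"
```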
I've still got the blend handy, so I can try a few more today. The mesh is far too dense to work with in edit mode lol; anything I try to do lags by like 3 seconds and makes big jumps, even holding the shift key. Did the best I could to fix the haircap by using scale in object mode, moving in .001 increments. Not sure why it was so visible in fast mode; maybe it has to do with how many bounces the preset sets or something. Still rendering huge, click for the full view popup. 128 samples on very fast took 2 minutes 43 seconds. Looks a lot better than I thought it would with such a dark scene. One weird thing though: I had to render 3 times. The first two times I got a few render tiles that were pure black, like they decided not to render that spot, and each time they were in different places in the image. Anyone know what would cause something like that?
OK, wow, I just noticed I've been using CUDA this whole time lol. I guess that is why it's been doing 64 x 64 tiles. I was going to check just my 2080 Super and see if my 1070 was slowing me down. Might as well test CUDA vs OptiX with both, then OptiX with one at this setting, since it's pretty fast.
OptiX with both cards finished in 3 minutes 1 second, still using 64 x 64 render tiles, I think. Same size tiles as before, pretty small. Don't see any visual difference.
Seems like the 1070 is speeding it up a little bit; it finished at 3 minutes 22 seconds with just the 2080. The tile size was still small though, and I coulda sworn I heard larger tile sizes were faster in Cycles. I am using auto tile size; I should try a larger manual tile size and see if it makes a difference in time, I suppose. Not gonna keep posting the same render result though lol. OK, I guess the large render size made me a bit off on my tile size estimate lol. I set it to 128, and the tiles got smaller. At that setting, just the 2080 Super finished in 3 minutes 31 seconds, and both finished in 3 minutes 13 seconds. With a 1024 tile size, both GPUs took 4 min 10 seconds, and just the 2080 Super finished in 4 min 23 seconds. So it seems to me the most efficient is actually both GPUs with auto tile size. Makes things easier for me lol. Moving on to the medium quality preset, with 128 samples and auto tile size on.
Medium quality with 128 samples took 5 min 9 seconds to complete.
High quality with 128 samples took 8 minutes 25 seconds to complete.
Physically correct with 128 samples took 12 minutes 39 seconds.
Unfortunately, for the high and best quality settings, it wouldn't accept the upload until I took them into PS and lowered the quality to 10 instead of 12 like the rest. Not sure why, but that might have had a small impact on the quality. Here is a side-by-side from fast to best quality settings, so none of them have gimped quality settings. I think the weird blotchiness on her skin might be caustics, so I'm going to move her back from the river a bit and do more of a portrait-view render series.
256 for just GPU is usually (not always) the best; also, it is well worth checking Adaptive Sampling. Checking that can also make 128 faster than 256, although my advice is to do some test runs with just a few samples (32 perhaps).
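For scripted test runs, those suggestions translate to something like this (a sketch using the values above; tile_x/tile_y apply to 2.9x, before tiles were reworked):

```python
import bpy

scene = bpy.context.scene
# GPU-friendly tiles: 256 is the usual sweet spot for GPU-only rendering.
scene.render.tile_x = 256
scene.render.tile_y = 256
# Adaptive sampling stops refining pixels once they pass the noise threshold.
scene.cycles.use_adaptive_sampling = True
scene.cycles.adaptive_threshold = 0.0  # 0.0 = automatic threshold
scene.cycles.samples = 128             # or 32 for quick test runs
```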
Oh yeah, adaptive is checked and set to auto on all my tests. The closeup-of-elf tests will also have that on. Render one, fast preset, 32 samples: took 2 min 6 seconds.
Medium preset at 32 samples took 2 min 54 seconds.
High preset at 32 samples took 4 min 30 seconds. I think I musta screwed up the import, or maybe the materials didn't convert properly.
She is nowhere near the water, where that skin blotchiness could have been caused by caustics like I thought lol. And the eyes are looking bad. Gonna have to try another export/import. Will have to wait till after supper. My method was: save the file, export high resolution with Diffeo, export at current resolution (everything at SubD 2 that had SubD applied) with Sagan. Import with Diffeo, delete the Diffeo import, then import the ABC. Anyone spot anything wrong with that workflow?
OK, most of that weird blotchy skin was being caused by the emissive fire light thingies, I think. I decided to hide them and just add a plain old area light and see what happens. With the alembic, I was getting that ZBrush polygon look; you know, when you import a G8 figure and then subdivide a few times, and it's all hard polygons that you gotta run the smooth brush over? Looked like that. So I deleted that and reimported the Diffeo HD figure, got rid of the non-HD stuff... doh, the sword and board must not have had HD on them; they vanished. Decided to render anyway to see what it looked like. No blotchy skin, no hard polygons; that is a plus. But that visible haircap is no good. I am fairly certain I used UHD2 shaders on the cap and hair. Maybe that haircap shader has issues? Anyone else used it before? Here is the medium preset: 64 samples took 3 min 41 seconds (I think it had to reload the kernel for some weird reason), and 128 samples took 3 min 8 seconds.
Then bumped it up to the high preset: 64 samples finished in 3 min 41 seconds, 128 samples in 5 min 3 seconds (forgot to save an image of that, doh), 256 samples in 8 min 21 seconds, 512 samples in 15 min 24 seconds, and 1024 samples took 26 min 58 seconds. All in all, I am still pretty stoked I can render that big at all; 1024 samples might not even be enough for a detailed portrait. I really gotta dig into that material masterclass so I can maybe fix things like that visible haircap easily myself, maybe even improve on some materials.
Those look decent. Also, it looks like your haircap is missing the alpha channel; if it's there, unplug it and plug it back in. Also make sure the map actually exists.
Looks like it does have one; replugging it didn't help. But looking at the image name, it appears I might have forgotten to slap the UHT2 haircap shader onto it.
Well, took a look at the DUF file; it's not that I forgot to apply the shader, it's that the scalp shaders just won't apply to it.
Create your own quick shader. Whichever engine you are using, create a new (for example) Cycles shader; keep the other as a reference, but reuse the Cycles/Eevee Material Output.
Use a Principled shader:
Shift+A > Shader > Principled BSDF, then add an Image Texture and find the scalp map you need; you shouldn't need the Texture Coordinate or Mapping nodes, but I'd add them just in case.
Plug its alpha into Alpha, and perhaps change the base colour to something appropriate. It's only a test.
In Edit Mode (if that doesn't at least partly solve the issue), select the scalp geometry, open a UV window and make sure it is unwrapped in a believable shape (second image). A scripted version of these steps is sketched below.
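Here is that scripted version, assuming the scalp object is active, with a placeholder texture path you'd need to swap for the real scalp map:

```python
import bpy

# Steps above as a script: Principled BSDF + scalp texture, with the
# texture's alpha driving transparency. The image path is a placeholder.
obj = bpy.context.active_object            # the scalp/cap object
mat = bpy.data.materials.new("ScalpTest")
mat.use_nodes = True
mat.blend_method = "HASHED"                # so Eevee shows the alpha
nodes = mat.node_tree.nodes
links = mat.node_tree.links

bsdf = nodes["Principled BSDF"]
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//textures/scalp_trans.png")  # placeholder

links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
links.new(tex.outputs["Alpha"], bsdf.inputs["Alpha"])
obj.data.materials.append(mat)
```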
First, I was able to edit the hair in isolation mode without such crazy lag. So I deleted the haircap altogether; it didn't look quite right, I don't think. The hairline looks pretty funky to me. Making a new BSDF and wiring in the transmap didn't make the haircap go away either. Now I remember why I hate Daz hair so much lol. Even in Studio it is usually what I have to spend the most time mucking around with to get looking OK.
Find a cap you like.
Export it from Studio as an OBJ, import it in Blender and check that the material is correct; I've done that at times. Then just parent it to the head bone: select the cap, then the head bone, press Ctrl+P, and choose Bone from the list that appears. If Bone doesn't appear, make sure a bone is selected and that the rig is in Pose Mode.
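The script equivalent of that Ctrl+P step, if you prefer it (the object names are examples; Genesis 8 rigs usually call the bone "head", but check yours):

```python
import bpy

# Script equivalent of the Ctrl+P > Bone parenting described above.
cap = bpy.data.objects["HairCap"]          # example name: the exported cap
rig = bpy.data.objects["Genesis8Female"]   # example name: the armature object

cap.parent = rig
cap.parent_type = "BONE"
cap.parent_bone = "head"
# Bone parenting pivots on the bone's tail, so the cap may jump;
# reposition it afterwards, or parent with "Keep Transform" in the UI.
```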
Ugh, tried another hair and it seems like I am getting the same kind of issue. Sagan started giving me a generic error, and trying to move the Diffeomorphic version is too slow and laggy. Guess I should just stick to crappy low-resolution slow Iray renders.