Something that has been hinted at but not really addressed outright is the whole 'exr' and post-processing question. At a certain point in an artist's development, post processing offers a wealth of advantages and is well worth the time spent learning about. I believe post processing will play a growing role going forward, as the tools advance and as people's expectations of final quality continue to grow. There is so much there that explaining it is really like trying to explain color to someone who's only experienced black and white. When one gets to the point where it is feasible to learn about, jump in.... it's worth the swim.
Edited...
Meanwhile, Totally off topic, but too cool to not post, this video on 15 Reasons Why The Future of Video Game Graphics Is JAW DROPPING is really fun and I'm interested in what people think of it.
It is on topic in the sense that DAZ, Blender, and the artists (us) all have to keep one eye on where things are going. ;)
As someone whose eyes glazed over at all this filmic whatever talk, I mostly skipped over the conversation about it~
However I just happened to stumble across this video (which coming back to post it, I saw Gedd linked further up on the page) and highly recommend it. It does a great job talking about photorealistic rendering in normal human language - and I assume it'd be useful in a general sense for other renderers too.
This is what I'm using currently, actually. The pure filmic can be a little sterile for my tastes.
See, this is why Andrew Price is way too hype-y for me. Filmic is a view transform. It cannot in and of itself make things converge faster. More useful visual feedback can make it easier to use more realistic values, which can in turn make things converge faster, but that is a very different thing.
No worries, we're probably 99% in agreement, and I do rather like politely debating details.
I think the main thing I have been trying to get across is that Filmic does not change any underlying data, merely how it is presented in the viewport (or exported in 32-bit formats). It really doesn't help that a lot of the language has multiple meanings, especially when it comes to any HDR-related terms (all of which have completely different meanings in a photography context).
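To make that concrete: in Blender (2.79+, where Filmic ships built in) the switch is a single color-management property on the scene. A minimal sketch:

```python
import bpy

scene = bpy.context.scene

# Filmic is a color-management (view transform) setting on the scene;
# it changes how values are displayed, not the path-traced data itself.
scene.view_settings.view_transform = 'Filmic'

# The render buffer is untouched: exposure tweaks here only change how
# the same scene-referred values get mapped to the screen.
scene.view_settings.exposure = 0.0
```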
I'm probably going to regret even asking this, but isn't the main point of using Filmic to get a better light range, not to magically speed up renders?
That was my understanding too - that it has two main purposes: a better light range and a more realistic reaction by colors to extremes of light. I've watched probably five or six videos and read threads and blog posts on this in several places, and the post(s) above are the first time I have seen anything even mentioned about the speed at which an image converges. If anything, you would expect it to be slower, because to get the full result you need to turn off any direct/indirect light clamping.
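For reference, the clamping being talked about is a pair of Cycles scene properties; a quick sketch (values here are just illustrative):

```python
import bpy

cycles = bpy.context.scene.cycles

# 0.0 disables clamping, preserving the full dynamic range that Filmic
# is designed to display; raising these trades highlight energy away
# (fewer fireflies) rather than genuinely speeding up convergence.
cycles.sample_clamp_direct = 0.0
cycles.sample_clamp_indirect = 0.0
```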
Yes, this is a good point to stress. Any benefit from Filmic really comes from changing the workflow rather than the underlying mechanics, but it does serve as a good example of how much changing a workflow can impact overall usability for a large percentage of the users (artists in this case), and that was really the heart of my point.
You would think so, but I've seen multiple examples where a scene that converged at 2000 samples without Filmic converged at 500 samples with it. I haven't sorted out the underlying reasons why this happens yet, but it's been a recurring theme that I've seen multiple times now. This goes back to my original statement: try it out. I haven't had the time to put it through extensive enough paces to figure out the underlying mechanics, but in the couple of quick test renders I did, I also found this to be the case.

I think basically it comes from making better decisions when designing the lighting for the scene. If we can see the results of the lighting more realistically, we are more likely to light the scene properly. With the somewhat skewed view into our scene that the traditional Blender environment gives us, we are left adjusting for things incorrectly. There is a maxim: poor information leads to poor decisions.
(A side note: the more realistic reaction by colors to extremes of light can also be fixed in post, but it requires a certain knowledge set and taking the time to do it. This type of task is a perfect example of something that, in most cases, should be automated as the default.)
Unrelated to the above points but worth reiterating... Filmic not only gives us a better view into the underlying scene information, which is too vast to see properly within our screen's limited colorspace, but it cooks that information down at render time into something more closely aligned with what we would see in the real world working with real cameras, which for many will be the end of the process. That is, they won't bring a full-range EXR into a compositor to tweak out what is there but being shown totally incorrectly.* An example: I don't worry too much about whether my color temperature is set correctly when trying to capture something quickly with a dSLR, because I know I can change the white balance in Lightroom afterwards; but for many people this is not something they do, or if they do, not something they can do with confidence and proficiency. And, as in Blender, the default on most cameras is to cook it down (to a JPG), so it's not even something the average person can properly change afterwards.
* I don't do this either, and I'm guessing almost no one does. After all, this would require post-working every test render to see what the image should look like, which just doesn't make sense. With a dSLR image, I might know an image doesn't look great in the viewfinder but that I can work it in post to what I want; I'm guessing relatively few people have the same comfort level bringing a render into post.
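For completeness, saving that full-range data out of Blender instead of a display-baked PNG/JPG is just a render-output setting. A sketch (the codec choice is a matter of taste):

```python
import bpy

settings = bpy.context.scene.render.image_settings

# OpenEXR at full float keeps the scene-referred values that a JPG
# would irreversibly cook down through the view transform.
settings.file_format = 'OPEN_EXR'
settings.color_depth = '32'  # full float per channel
settings.exr_codec = 'ZIP'   # lossless compression
```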
This is somewhat like shooting with a high-end video camera nowadays. They tend to shoot flat, which gives a sterile, somewhat washed-out look. This is on purpose, as it is meant to be a full-range, flat base onto which the colorist can overlay whatever effects s/he might want in a consistent manner. For anyone who has worked with filters on their photos and wondered why they don't get the results they expected, the cause is not starting from a neutral, flat base.
Where I believe the future is headed is that the tendency (at least in anything other than pure lowest-level consumer products) will be to work to this neutral base and have effects filters one can sort through, as that is the only way to get anything close to consistent results with effects filters. If I were to guess, even basic consumer-level products will work to this base and then overlay effects; they would just hide the flat base from the consumer, as it would be unwanted information, whereas higher-end products would expose it so semi-pro/professional users can evaluate the base they are working from. But, whether hidden or visible, getting to that flat, neutral base will be an important middle step in developing consistency of results.
As a matter of interest, are you guys working with content from DAZ Studio in Blender or are you talking about stuff produced in Blender itself? If the former, how are you transferring the content? Casual's script? And do you then set up your materials within Blender using the node system?
I ask because I tried Casual's script some time ago - only briefly - and found that I had to start tweaking the materials. When faced with the node system I balked and backed away. Bearing in mind the title of this thread, I wonder what the "not hard" procedure might be. I also wonder what the payoff might be - what does rendering in Blender give you that IRay doesn't?
EDIT: also, the major drawback I found with using that script was that if I wanted to tweak the poses, I had to return to DAZ Studio to do so. Then export the whole thing again and start over. I just didn't see the point.
Your question is an excellent one and covers many points, the first of which is why do it at all, since there is a context-switch cost - that is, the cost of redoing a lot of work in a different environment and the associated learning costs. The problem with this question is that it goes very deep and wide, and the answer will vary for any given situation.
As to the benefits of rendering in Cycles, there are many, as there are with Iray. They are different environments with different strengths and weaknesses. If bringing anything into Blender for rendering, we really want to redefine the materials from scratch, which does have a learning curve. There are many materials available for free and the community is vibrant, so getting through that learning curve is much easier in the Blender environment than in almost any other competing environment. The main advantage is much more flexibility in what we can achieve. How/why is a long discussion, but suffice it to say that one of the costs of Iray is a somewhat structured environment compared to a rendering environment like Cycles.
Exporting from DAZ to Blender for an animation is not something I would recommend before someone has a pretty good handle on bringing still pose/object scenes over and rendering them, as it is just an added complexity on top of the learning curve of getting good results for a still scene. As to animation import/export, for any package it is usually done in one of two ways: Alembic or a total re-rig. In some cases there is a third way, which is to render out the animation with an alpha channel and bring it together in a compositor, but that has its own issues and learning curve. It is well worth checking out, however, as there are serious savings of time in being able to composite results from different environments into a final piece.
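As a rough sketch of what the Alembic route looks like on the Blender side (2.78+; the file path is hypothetical):

```python
import bpy

# Bake the animation to .abc in the source application, then pull the
# cached geometry into Blender - no re-rigging involved.
bpy.ops.wm.alembic_import(
    filepath="/path/to/animation.abc",
    set_frame_range=True,  # match the scene length to the cache
)
```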
In the end, exporting to Blender is a commitment in time and learning but the payoff is being able to do many things that would be hard or impossible in the DAZ environment.
There is another way of doing animation which I think will be more feasible for many people, and it isn't Blender: exporting to something like Unreal and using Unreal's sequence editor tools. These have gotten so strong that apparently every major Hollywood studio has announced a project using Unreal for visualization (according to Epic's CEO at GDC 2017).
I mostly use Blender to model things for rendering in Carrara, actually (I like modeling in Carrara, but for more complex items Blender just has a better toolset and is faster for me), but on the occasions when I have used DAZ products in Blender I have used Casual's script and really like it. You're right that I do have to tweak many materials, but not as many as you would think, except for things very close to the camera. The script does the heavy lifting of that work by setting up simple materials and loading the maps for you, which is always the painful part of importing a model from another program.
As for posing, I agree with you - that can be a pain. All I can say is: try to get the pose right in DS first; remember that you don't have to re-transfer your whole scene (you can do just the character); and for very minor adjustments, practice with proportional editing in Blender's edit mode. I use it all the time to make minor changes to pose, expression, clothing/hair movement, etc.
The advantages for me are not necessarily Iray vs. Cycles (although Cycles does render much faster on my machine at similar quality settings - hard to compare directly, so this is just from eyeballing the results side by side) but the extra tools available in Blender that DS doesn't have (yet?). The advantages are the same things that make me prefer Carrara over DS.
If you look in my gallery here, I've got a few images rendered in Blender using DAZ assets, and I think the overall quality is comparable to Iray, but some of the things I did would not have been as quick and easy in DS.
Thanks Gedd. What it comes down to for me is that I'd be prepared to give it more time if I could pose easily in Blender. I guess that DAZ and Smith Micro are pretty worried that one day someone will write a plugin that makes the rigs compatible or whatever needs doing to make it simple to pose DAZ figures in Blender.
Well, there you go... as I mentioned, will depend on who you ask.
Those are some excellent examples. Thanks MDO2010 for your examples as they provide a good counterpoint to what I was presenting. :)
If doing a still scene, it doesn't really need a rig, as it can be exported pre-posed; and if there is a rig, it doesn't need to be a perfect port if one isn't animating. If one is animating, the rig from DAZ wouldn't be sophisticated enough for many people, as most animators would want more facial controls at the least. Also, many parts of animation depend on blend shapes/shape keys (different names for the same thing in different software), which don't port directly between applications at the moment and so would have to be recreated in the receiving software. Again, a bit of a learning curve.
This brings up a good point. When working with multiple applications in a workflow, we really need to identify which software is best for which aspects of our workflow and make good decisions about the most efficient way to port between the applications. This part will differ from environment to environment (studio to studio), but it is important to identify and work out the details. We don't want to put extra work into developing something that will just get thrown away later in the workflow if we can avoid it. And when we really do need some materials at one stage for visualization, but those materials will get thrown away, then we want to consider what will port over and how detailed the throwaway parts really need to be.
One of the reasons Substance is becoming such a big deal in the 3D community is that materials developed through Substance can be used with some level of consistency (if properly planned) in different software, which means more consistency in this area of our workflow.
LOL - thread moved on while I was writing my novel.
Is anyone else trying to play catchup on the videos from GDC 2017 btw?
I'm posting ones as I go through them that I think stand out on my Pinterest boards (VR one in particular so far) for anyone interested.
I model in Blender and sometimes use the Casual script to render in Blender; I also will on occasion render in DS as an EXR and then use Blender as a compositor. While I certainly wouldn't mind a more robust bridge, Casual's script is pretty good. You definitely have to tweak materials, though, which in Cycles means nodes. That's actually one of the advantages Cycles has over Iray for me: nodes offer a much greater degree of control, and Blender's are actually pretty easy to use (certainly easier than the Shader Mixer in DS). Another instance where control comes into play is that it's very easy to go into edit mode and pull vertices around slightly - say, if the arms are down, fixing skin intersection and making the contact look a bit more natural.
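To give a feel for what that tweaking amounts to in script form, here's a minimal sketch of the kind of simple image-texture material the script sets up (file path and names are hypothetical):

```python
import bpy

mat = bpy.data.materials.new("TweakedSkin")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

# Image texture feeding a diffuse shader into the material output -
# about as simple as a Cycles node tree gets.
tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("/path/to/torso_diffuse.jpg")
diffuse = nodes.new('ShaderNodeBsdfDiffuse')
out = nodes.new('ShaderNodeOutputMaterial')

links.new(tex.outputs['Color'], diffuse.inputs['Color'])
links.new(diffuse.outputs['BSDF'], out.inputs['Surface'])
```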
The other main advantage with Blender is that it has much more robust instancing. The wheat-field render I posted is honestly one of my favorite renders of all time: there are a couple hundred thousand wheat plants there, and it took less memory and time to render than a single untextured Genesis lit by an HDR would. Trying to render something similar in DS would, on the other hand, bring my computer to a crashing halt. If you're ever interested in rendering things like grass or forests, Blender will beat DS up and steal its lunch money.
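For the curious, that wheat trick is particle instancing; a rough sketch (Blender 2.8+ property names, object names hypothetical):

```python
import bpy

ground = bpy.data.objects['Ground']  # emitter plane (assumed to exist)
wheat = bpy.data.objects['Wheat']    # one modeled wheat plant

mod = ground.modifiers.new("WheatField", type='PARTICLE_SYSTEM')
ps = mod.particle_system.settings
ps.type = 'HAIR'            # hair emission keeps the instances upright
ps.use_advanced_hair = True
ps.count = 200000           # Cycles stores the wheat mesh only once
ps.render_type = 'OBJECT'
ps.instance_object = wheat  # called 'dupli_object' in Blender 2.7x
ps.size_random = 0.4        # vary plant scale for a natural look
```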
Similar to instancing: hair. Iray still doesn't have proper strand rendering for hair, and mesh objects, no matter how optimized, cannot compete. I also find Blender's hair styling tools easier to use and to get something good out of than either DS option (and I even consider myself a pretty deft hand at one of the DS options, but still find Blender's hair miles better). (I'll post some hair examples later.)
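And a sketch of the strand-hair setup for comparison (hypothetical names again; the point is that only the guide strands are stored and combed, and children fill in the rest):

```python
import bpy

scalp = bpy.data.objects['Scalp']  # emitter mesh (assumed to exist)
mod = scalp.modifiers.new("Hair", type='PARTICLE_SYSTEM')
ps = mod.particle_system.settings

ps.type = 'HAIR'
ps.count = 500                 # editable guide hairs you actually comb
ps.hair_length = 0.25
ps.child_type = 'INTERPOLATED'
ps.child_nbr = 50              # children per guide in the viewport
ps.rendered_child_count = 200  # children per guide at render time
```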
On going back to Blender to fix posing without losing changed materials, there are a few ways to do this. Provided you have not changed any material zones, you can save your original version; then, when exporting your corrected scene, select "use mat library" and point it at your original scene. This tells the script to use the materials from your original scene in your new one.
I'm certainly getting some good responses to my little multi-part question. I am starting to see advantages already. For one, I spend an awful lot of time exporting posed figures and clothing from DAZ to Blender just to fix the way conforming clothing doesn't conform, so having the editing tools available immediately would be a time saver. The node system is scary to me, but then the Blender interface was scary too and I'm starting to like it now. I also have Garibaldi Hair, which was less than impressive for my purposes - I remember spending the best part of a day trying to make a ponytail and gave up in the end.
I don't do much animation so tweaking static poses is my main concern.
The node system in Blender is the easiest to understand of all the nodal systems I've used so far. The nodes are named in ways that make sense to the user, rather than from some arcane physics or mathematical reference that the end artist doesn't really need to wrap their brain around. Don't get me wrong, they do have physics and math nodes that are named as such; it's just that the naming conventions and node systems are designed more from an artist's perspective, which makes it one of the best environments to learn shader design (imo).
As a bonus (as previously mentioned) there are a lot of good examples available as well as many active threads in forums for learning from and asking questions.
So this would be like saving MAT presets in DAZ Studio, then? You could end up with a library of textures/mat poses for a growing set of content.
And my best example of "hair you have no hope of ever doing in DS, but Blender absolutely kills": it's a slightly abandoned render - I still need to work on the styling and shader a bit - but I can't even begin to comprehend how I would approach this in DS. That hair has about the same memory hit as the figure and its textures.
It's not quite as simple, because the created library can only store one set of textures for any object (like, say, Genesis 3 Female). You could work around this by naming your characters before export.
In Blender, particularly for DAZ content, it's actually much easier to transfer vertex locations than material data, so I usually go at it in the completely reversed direction. My method is more often: add the object with the materials I want within Blender (usually by appending from another blend file) > select the matching object with the pose I want, then shift-select the new object with the materials I want, and in the vertex data panel under Shape Keys click "Join as Shapes" - basically transferring the object shape rather than the materials. But it's such a blendery and backwards way of doing things that it's not really something I'd recommend.
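For anyone who wants to try it anyway, the same trick in script form might look like this (a sketch: object names are hypothetical, the selection API is 2.8+, and both meshes must share vertex count and order):

```python
import bpy

posed = bpy.data.objects['Genesis3_posed']        # new pose, old materials
textured = bpy.data.objects['Genesis3_textured']  # materials you want to keep

bpy.ops.object.select_all(action='DESELECT')
posed.select_set(True)
textured.select_set(True)
bpy.context.view_layer.objects.active = textured  # active object receives the shape

# Same operator as the "Join as Shapes" button in the Shape Keys panel.
bpy.ops.object.join_shapes()

# Dial the new key to 1.0 to push the textured copy into the pose.
textured.data.shape_keys.key_blocks[-1].value = 1.0
```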
I forgot to mention earlier, btw, that there is a proper ubershader node coming soon to Blender, so less node wrangling might soon be necessary if you want to avoid it.
Personally, I'm not a fan of ubershaders*. I prefer purpose built shaders. But I assume some people must like them or they wouldn't exist so for that reason I guess it's a good thing. ;)
----------------------------------------------------------------------------------------------
* I don't find having a lot of parameters that have nothing to do with the material I'm working on to be an advantage, but rather unnecessary clutter. Since we can have a material library of purpose-built shaders to draw from, it just makes more sense to me to do it that way.
Forgot to mention, beautiful examples J Cade. :)
Don't forget purpose-built shaders will also be slightly faster. But if you're coming from a non-node paradigm, an ubershader can definitely be less intimidating; you also don't have to worry about things like energy conservation, which is another shift in how one has to think about things.
I don't see how an ubershader would be less intimidating than a purpose-built shader with just the inputs that make sense for that material.
When creating a shader... well, one should invest in learning how to create shaders if one wants to create them, and in that, Blender makes it as easy as possible. I just fundamentally see ubershaders as a solution looking for a problem - that is, giving someone who isn't interested in learning how to create shaders a tool to create shaders, when the real answer is to provide them with a simplified library of shaders already built. In the DAZ/Iray environment we are dealing with a different beast, since there is no library of simple nodes from which to build shaders. One either has to code one's own, use somewhat esoteric tools and more esoteric constructs, or use an ubershader to create a new material, so there an ubershader makes sense as a stopgap in a less than ideal situation.
Well, the ubershader will come automatically in Blender; you won't have to download or build any sort of shader. It might even replace plain Diffuse as the node that gets created by default when you create a material. In other words: create a new material, add any image maps, do it all right in the surfaces panel without ever even looking at a node. Not everybody likes nodes - I don't personally understand it, but it's true.
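(For readers coming to this later: that node shipped as the Principled BSDF in Blender 2.79, and from 2.8 on it is indeed the node new materials start with. A quick sketch of driving it directly:)

```python
import bpy

mat = bpy.data.materials.new("UberTest")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

# One node covers diffuse, metallic, glossy, subsurface, etc. via inputs.
uber = nodes.new('ShaderNodeBsdfPrincipled')
uber.inputs['Base Color'].default_value = (0.8, 0.6, 0.2, 1.0)
uber.inputs['Metallic'].default_value = 1.0
uber.inputs['Roughness'].default_value = 0.3

out = nodes.new('ShaderNodeOutputMaterial')
links.new(uber.outputs['BSDF'], out.inputs['Surface'])
```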
Thanks although I am a long way from having a clue what you are talking about. With Blender, I'm constantly procrastinating - thinking, maybe in the next version they will introduce an easier way of doing things. At the moment, much of it seems targeted at experts. Basically, DAZ Studio is all I know (and that, not comprehensively). I didn't come to it from Poser or Maya or 3DS Max. I don't have a photography background or prior experience with shaders. I'm still not sure I know the precise difference between a shader, a material and a texture (not to mention a surface). So for me, Blender is hard when it comes to these things outside of my pose & render small world.
I remember when I used Reality/Luxrender before I bought a decent GPU for IRay: Paolo (the author of Reality) created a material editor specifically to avoid a node system. I also remember some of the more technical people saying that they were switching to Blender because it has a node system. I found the Reality editor difficult enough to understand and I still don't know what all the sliders in the IRay surfaces tab do. That is why the node system is daunting.
Yes, marble, for you and others who are interested in just creating the art without having to delve into the technicalities shaders require, the real answer is simply to have libraries of premade materials with simple inputs specific to what each material might need. An example is a gold shader (material) with simple sliders for polished vs. aged, or a fabric shader with a drop-down for weave, a drop-down for fabric type, and an input for a repeating pattern - just that and nothing more. That is the 5-million-Crayola-crayon approach* that the average artist would really be comfortable with, I believe. Simplifying things like that lets the artist focus on the part they really want to focus on in creating the art, and therefore makes them both more productive and more likely to achieve their vision.
* Really, it would be a more focused set of boxes of Crayola crayons, tailored to a given artist's/project's needs: NPR/stylized (watercolor, oils, anime, American '50s comic) vs. photorealistic, etc...
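Blender already has the plumbing for that crayon-box approach: keep finished materials in a library .blend and append them on demand. A sketch (the path and material name are hypothetical):

```python
import bpy

LIB = "/path/to/material_library.blend"

# Pull one named material out of the library file into this .blend.
with bpy.data.libraries.load(LIB, link=False) as (src, dst):
    dst.materials = [name for name in src.materials if name == "Gold_Aged"]

# Assign it to the active object.
bpy.context.object.data.materials.append(bpy.data.materials["Gold_Aged"])
```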