Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2024 Daz Productions Inc. All Rights Reserved.
Comments
I'm really enjoying some of Gleb's videos, not just for what they contain but for some of the thought processes they trigger.
Most of us who study photography were introduced to B&W as a way to remove the distraction of color and train our eye to see the underlying texture of objects and how they play into the basis of an image. B&W photography is an exercise we should do for a period of time if we want to understand this aspect of images.
Gleb takes it to another level, breaking images down to their base shapes and showing how those shapes make up the most basic visual structure and composition of the image.
I would suggest exercises taking our 3D images and breaking them down to silhouettes to develop our eye just as we did with B&W. Here is the video that inspired that thought: How to Look at Shapes Instead of 3D Models, When Setting Up Lighting.
Art tends to follow trends, which is ok until the trends become troughs. Gleb did an interesting video on ambient lighting which touches on some important concepts in lighting. My take on it, while based on what he covered, does vary somewhat, as I'll explain.
In the opening, Gleb has a slide with a quote from Richard Kelly.
"Ambient Luminescence minimizes the importance of all things and all people.
It fills people with the sense of freedom of space and can suggest the infinity."
This slide also contains some good example images.
Unfortunately, he doesn't go on to explore this idea further but rather steps right into adding more light. This idea of an image lit by ambient luminance alone goes against most current lighting setups. We go right for the three-light setup, the point lights to draw the eye to a 'focal point,' or dramatic lighting to create a 'mood.' But what if our mood is best created with basic ambient lighting: a very open, spacious feel where individual aspects of the image are diminished? Many images with a fog effect would benefit from either this or the extreme opposite, a closed-in feeling where everything falls into dark shadow. The point is, as Gleb points out in his closing slide, this aspect of lighting, flat ambient, is a playground of its own. One that has languished, ignored by most current artists as we chase ever more complex and dramatic lighting.
Ok, so on to the next point, where he adds an area light to increase the ambient occlusion. His visual with the two spheres with ambient occlusion demonstrates another interesting topic (not cooler, just different). At 0:56 he shows two simple spheres in a very desaturated slide with text. The combination of desaturation with strong AO shadows produces an interesting dramatic effect with just the right balance of punch to create a good graphic design. So, rather than being an extension of flat ambient, it steps into another lighting format where shadows come to the forefront without the distraction of directional lights. It's still technically a form of ambient lighting, but far from the flat ambient effect where the shadows are muted as well.
In his third example, he shows using an HDRI to create ambient lighting. The concept is that there is no lighting other than the HDRI, but a related point he didn't mention, which I think bears mentioning, is the choice of HDRI to create flatter lighting. Again, the point is that HDRIs can be used to create a wide range of lighting situations, and if the mood we wish to convey incorporates the opening quote, then HDRIs can still be used, but with a mind to the effect we are trying to create.
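One rough way to think about whether a given HDRI will read as "flat ambient" is its luminance contrast: an overcast sky is nearly uniform, while a sunny one has a tiny, extremely bright region. A minimal sketch of that idea (the function names and the 8:1 threshold are my own illustration, not anything from the video):

```python
def luminance(rgb):
    """Rec. 709 luma approximation for a linear RGB pixel."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def is_flat_hdri(pixels, max_ratio=8.0):
    """Heuristic: an HDRI reads as 'flat ambient' when its
    brightest-to-dimmest luminance ratio stays small.
    A sunny-sky HDRI can exceed 100000:1; overcast may be under 10:1."""
    lums = [luminance(p) for p in pixels]
    lo = max(min(lums), 1e-6)  # guard against division by zero
    return max(lums) / lo <= max_ratio

# overcast-style samples vs. the same samples plus one bright sun pixel
overcast = [(0.4, 0.45, 0.5), (0.5, 0.55, 0.6), (0.35, 0.4, 0.45)]
sunny = overcast + [(500.0, 480.0, 450.0)]
```

In practice you would just eyeball the HDRI preview, but the ratio captures why one map gives soft shadowless fill and another gives hard sun shadows.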
At 1:55, he makes a good point about ambient lighting: we don't need the same level of bump/normal detail as in a more dynamic scene. Technically, this only applies to 'flat ambient'; once we start working with AO shadows, the shadow detail does indeed become more important. But if we are working with the first example, flat ambient, we absolutely won't see the differences bump/normal maps make, and we can save render time by not including them, just as we don't need them on anything that is out of our DOF (depth of field). An extension of this: if we are creating a 3D space such as a VR or game environment, we can adjust the level of bump/normal detail vs. baked textures based on whether a portion of the environment sits in background flat ambient or in foreground/spot lighting. By balancing these, we can optimize our environment to use fewer resources while maintaining the illusion of a much higher overall quality, just as we would with LOD.
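To make that LOD-style tradeoff concrete, here is a hypothetical helper (the zone names and function are my own, just to illustrate the decision): maps are only worth loading where relief shading can actually be seen.

```python
def needs_normal_map(lighting, in_focus):
    """Decide whether an asset's bump/normal maps are worth loading.
    Flat-ambient surfaces show almost no relief shading, and anything
    outside the depth of field is blurred away, so both can skip the maps."""
    if not in_focus:
        return False  # out-of-DOF detail is invisible anyway
    # relief only reads under directional or strong-AO lighting
    return lighting in ("directional", "ao_heavy")
```

A background prop under flat ambient gets the cheap material; the same prop moved into a foreground spotlight gets its full maps, exactly like swapping LOD meshes.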
So, while my take might vary a little, I give full credit to Gleb for setting the stage and inspiring me to dig through my own past experience and learning to piece together some important points I hadn't fully put together before. I think that inspiring others to think is often better than showing them how. Give a man a fish...
Watch the video, see what you draw from it, what ideas are inspired by it and post them.
Related to the last post, I looked up Richard Kelly, found a lot of interesting information, and would recommend others do the same. Google "Richard Kelly light theory" or your own variation on that and do some poking around; it will be worth the time. A quick one of interest is The Six Qualities of Light, which also lists his take on the three elemental kinds of light.
Another thought occurred to me on flat lighting with low-poly objects, expansiveness, and some desaturation. These aspects define what we see in the distance when a scene has an expansive backdrop. If we use low-poly objects, no lighting other than the HDRI for the most part (some sun if/when needed), and slightly desaturate (and, if appropriate, dim) the resulting image, we can combine that with mid- and foreground elements using the same HDRI with more detail and directional (focus) lighting, especially on the foreground objects. We can layer the scene (composite) in a way that optimizes render time and helps tie the whole scene together.
Making-of features for many older movies show hand-painted backdrops, and I'm seeing more and more of this technique in modern demo reels; it is well within our current toolsets even at an indie level.
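The desaturate-and-dim step for those distant layers is simple per-pixel math; a minimal sketch (the function is my own illustration, using the common Rec. 709 luma weights, not anything from the thread):

```python
def desaturate(rgb, amount, dim=1.0):
    """Blend a pixel toward its gray value (aerial-perspective style),
    optionally dimming it. amount=0.0 leaves the color unchanged,
    amount=1.0 is full grayscale."""
    r, g, b = rgb
    gray = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luma
    mix = lambda c: (c * (1.0 - amount) + gray * amount) * dim
    return (mix(r), mix(g), mix(b))

# e.g. push a backdrop layer halfway to gray and dim it 10%
backdrop_pixel = desaturate((0.8, 0.3, 0.2), amount=0.5, dim=0.9)
```

In Blender you'd do the same thing with a Hue/Saturation node on the background render layer before combining it with the foreground passes.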
I really like this video where Gleb shows how complex lighting can be achieved with some pretty simple techniques and a bit of playing.
Another Blender render! :D
Using my OC and turned him into a badass cop!
Very nice. :)
I've started a Facebook page on CG, Art, etc.. called Art of the Web for anyone interested.
This was a great project: an Andrew Price tutorial on making an Earth-seen-from-space image that gets you involved with some of the compositing functions in Blender without going too far over your head, provided you have some basic knowledge of Blender (none is needed for the compositing functions). Essentially it's a primitive, so your modeling skills are not really tested, but you do want some basic understanding of moving around in the interface under your belt.
Yes thanks for that link. I remembered the tutorial but didn't remember who did it or where it was from. It ties in perfectly with the Natron tutorial. :)
I'm going to be playing with trying to find the balance going forward between posting on the Facebook page mentioned a couple posts ago and posting here regarding Blender.
The original point of the thread was to help people who were newer to Blender and those who were trying to tie it in with DAZ Studio. I like to post some advanced things, like this How to create a Particle Based Audio Visualizer, for inspiration, along with noteworthy news articles, but I don't want to get too far off track. So, if you happen to find this added information of interest, then definitely check out the FB page.
* Having said that, I find the community interaction here invaluable, so that will also play some into what I post where.
Well, I gave it a shot. Used the mcjTeleBlender script and imported the scene into Blender. I have a problem when I attempt to render with Cycles, though: it's like there are no textures. I can't for the life of me figure out why.
I would recommend retexturing in Blender regardless.
With the advent of Iray in DAZ Studio, there's not much benefit to attempting to render DAZ content with Cycles in Blender. I have used the export script and may have been the first person to render a DAZ figure with Cycles subsurface scattering (see pages 9-10 of the mcjTeleBlender thread). If you're new to Blender, you'll put more effort into getting something decent out of that script than into just learning how to use Iray. Mcasual himself said that the DAZ Studio default shader is what his script looks for, so imagine how much information is lost when more advanced 3Delight shaders are converted. There's probably a good reason those advanced materials/shaders exist, right?
Learn 3delight and/or Iray. Use Blender to make character morphs, clothing, accessories, etc., for your DAZ Studio renders.
Actually, there is a lot one can do rendering in Blender that one cannot do in DAZ Studio. Iray is a very nice render engine, but the list of reasons for Cycles and Blender is so long I wouldn't know where to begin. As for using scripts to convert materials, as I said, one should learn to texture for the render engine one is using, since any conversion process is by definition very limited at this point. Going forward, we may reach a time when enough of the materials carry over that one can get a decent start from a conversion, especially with some amount of standardization emerging around PBR.
That seems like such a daunting task. I wouldn't even know where to begin.
I disagree, strongly. And I love Iray, but there's a lot you can do rendering-wise in Blender that you can't in Iray: strand hair, for instance, and much easier instanced geometry (I can render a forest using only 150 MB of memory). That said, if you are new to Studio, yes, you should probably start with Iray; but if you're already fairly competent, rendering in Cycles is a great way to expand your horizons. Among other things, the only reason I can use Shader Mixer in DAZ is that I was already familiar with setting up materials via nodes in Blender (Blender nodes are roughly 10 million times easier than Shader Mixer, in case you're wondering).
@lotharen you don't need to set up everything from scratch; at the very least teleblender imports most of the mats, but it only creates a fairly basic plastic-y material. Blender's node system is actually pretty straightforward: mostly it's just using the mix node to combine the different things you want. It's basically building blocks. You want SSS? Add an SSS node. Want glass? Add a glass node, and that's it.
There are also a lot of materials that people have put up online. I find them useful for looking at and learning how to do stuff, but they're also good to just use.
I actually miss nodes when setting up materials in Iray.
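The "building blocks" idea behind the mix node boils down to a weighted blend between two shader results. A numeric model of the pattern (not Blender's actual code, just the same idea with RGB tuples standing in for shaders):

```python
def mix_shader(fac, shader_a, shader_b):
    """Model of a Mix Shader node: blend two shader results by a factor.
    fac=0.0 gives all of A, fac=1.0 gives all of B, values between
    interpolate -- the same building-block pattern as Blender's node."""
    return tuple(a * (1.0 - fac) + b * fac
                 for a, b in zip(shader_a, shader_b))

# e.g. blend a diffuse result with a glossy result, 30% diffuse / 70% glossy
diffuse = (0.8, 0.2, 0.2)
glossy = (1.0, 1.0, 1.0)
result = mix_shader(0.7, diffuse, glossy)
```

Chaining these is how complex materials get built: mix diffuse with glossy, mix that with SSS, and so on, each mix driven by a factor (or a texture).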
Which comes back to your point about there being no good reason to render in Cycles/Blender: while there are plenty of good reasons, one has to look at one's time and resources and decide what will get them where they need to be within their constraints. That being said, it may be that focusing on Iray will work best for you.
One advantage to learning Cycles is that there are a lot of resources available and it is a well-developed render engine with a very capable node-based material editor. I am guessing Iray will eventually have that as well, since that seems to be the trend, so learning Cycles would also help people use more of Iray's functionality as it becomes available (if my thinking is correct on this).
Don't get me wrong, I'm not trying to make an argument for any specific direction, as I believe it all comes down to the individual, but it helps to understand the differences between environments so we can weigh the various tradeoffs and how they apply to our particular circumstances.
There are many people in the DS community who are perfectly content with sticking with using all premade content, materials/shaders, pose and action cycles, and that is a perfectly valid choice which I fully support. There are others who fit somewhere between there and making content to sell (PAs.) We just need to pick our targets and go for them. But, for anyone who wants to know more about the underlying materials/shaders, etc... Blender offers not only a capable environment with a lot of flexibility, it offers a very good learning environment as well.
j cade and I posted at the same time so it's interesting to see we covered some of the same points but in different ways. (j cade did go into some examples I of course didn't mention as there is an extensive list of them, but those he presented are very good examples to start with.)
Do you see the textures in the viewport? At the bottom of your working window area are a few options. Next to the object-mode box is a box that allows you to switch views to shaded, solid, wireframe, etc. It has an icon of a little ball. Do you see textures if you choose material shading?
Look, I actually prefer Cycles to Iray. However, some people assume that the teleblender script is almost as good as a "plug and play" solution like what you get in the DAZ Store. But in the end you have to set up your own shaders from scratch for best results. You will likely change the lighting, since 3delight settings won't look or work the same way they did in D|S. And the characters in your scene come in unrigged. If you see something about the pose you'd like to change, well, export your scene all over again from DAZ Studio. Yes, it can be done. I've done it. It's a whole lot of work. Oddly enough, though, when testing out materials in Blender these days, I'm relying more on the old Blender Render mode and GLSL for quick prototyping instead of Cycles. By all means, though, if you already have the technical knowledge of Blender, you don't need my advice, which was intended for someone new to Blender.
Hey folks. Work has been crushing lately; can't get time for Blender OR DAZ Studio. Hopefully that will change for a few nights a week starting next week.
Gedd, I want to express my thanks to you for keeping this thread going. It will be a huge resource for me when I start getting some time again.
The source code for Cycles will be used as the next rendering engine option for Poser
http://blog.smithmicro.com/2015/07/28/poser-3d/the-future-of-poser/
The one thing Cycles does that neither Iray nor Lux can is cartoon rendering, and it's far more flexible than the limits of 3Delight in DS. But for the most part I use Lux for DS and Blender, which is a no-brainer since it's entirely free for Blender and I can apply the same logic to setups in both apps. I think buying Reality and using LuxRender was the catalyst for me to see if the third time would be the charm in going back, determined to learn the Blender UI.
When I exported the first time, no textures showed. So I checked the 'Collect Map' box in teleblender, and in the viewport the textures show on the figure. When I switch to Cycles and render, it's just porcelain white, like nothing carried over. I'm fairly new to Studio and Blender, so I'm thinking this has a lot to do with my failure to grasp this, lol.
I'm trying to find a good render engine that will basically do what Carrara does, with updated software. Blender can do particles for fire, rain, snow, etc., which I think will make my CG art look amazing for scenes that use them. I'm just having a hard time wrapping my head around it. For the longest time I couldn't get the UV map to work when modeling; it never looked right even though I did what the tutorial said to do. I ended up purchasing Silo2 just for its UV capabilities.
So, with that said, thank you all for being here to help, and I apologize if I drive you crazy with questions.
If memory serves, that exporter basically produces an .OBJ with an .MTL file; the .MTL file links the locations of the images back to the .OBJ.
I could be wrong, I have not used it in a dogs age...
...Make that two dogs! And a cat!
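For anyone curious what that .MTL linkage looks like, here's a minimal sketch of pulling texture paths out of one (real MTL lines can carry extra options that this ignores; the sample file content is made up for illustration):

```python
def mtl_texture_paths(mtl_text):
    """Collect the image paths referenced by an .MTL file.
    map_Kd is the diffuse texture; map_Bump/bump are bump maps;
    map_Ks is the specular map."""
    maps = {}
    for line in mtl_text.splitlines():
        parts = line.strip().split(None, 1)  # keyword, then the rest
        if len(parts) == 2 and parts[0] in ("map_Kd", "map_Bump", "bump", "map_Ks"):
            maps.setdefault(parts[0], []).append(parts[1])
    return maps

sample = """newmtl skin
map_Kd textures/face.jpg
bump textures/face_b.jpg
"""
```

If the render comes out untextured, checking whether those paths actually exist next to the .OBJ is a good first diagnostic.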
Lol, so those of you that don't use the exporter - what do you do?
I've used it; it's very cool, but I'm not great at material adjustments in Cycles, and it's far easier for me to work with LuxRender in Blender since most of the concepts are identical to those I use in Reality when I'm in DS.
Casual has made a wonderful tool (he's made dozens), and despite the fact that English is probably not his first language, he's very good at fixing problems and supporting the tools he puts out there for free. I've never seen him in the Commons, but about once a week he's got something new in Freebies if you want to ask him something specific.
The primary consideration I have at the moment is render times. I did try the mcjTeleBlender script but found the export was probably only a starting point. A lot of learning is required to set up materials and lights correctly, and I have not embarked on that journey yet.
I don't have, nor can I afford, a super-duper GPU, so I'm stuck with CPU renders. This web review suggests that Cycles is slower than other CPU render engines, so my initial interest faded with that information. I'm not sure, given that we can export objects from DAZ Studio to Blender, why we would use Cycles in preference to LuxRender. Both are freely available for Blender, and both, it seems, require setting up materials rather than relying on those exported from DS.
I have Reality 4.1/LuxRender. Is there a benefit to using Lux from Blender? Will it actually render the particles if done this way?
I'm trying to find the workflow best suited for me. I've used Iray and get decent results, and I've done the same with LuxRender. I saw Blender as a way to render everything without the need for heavy Photoshop post work, like adding haze, god rays, fire, or even clouds. I think if the render engine can do this, you can make the scene even more believable and realistic, to a point. That's what I want to achieve; I hope I'm heading in the right direction.
Blender's compositing capabilities may be an option to consider. Professional CG rarely uses a single pass render to make an image, since it's very common for parts of a scene to undergo revisions during the creative process. I remember the first time I watched "Big Buck Bunny," one of the Blender open movie projects. When I opened the scene files (back in version 2.49), I was surprised to see how nothing was rendered in a "finished" state. Instead, different parts of the images were rendered separately and put together in the compositor to create frames that were greater than the sum of their parts. Correct me if I'm wrong, but I'd be surprised if the latest Blender open movie, "Cosmos Laundromat," didn't also use multiple render passes that are combined in the compositor.
You can render different elements of your scenes and combine them like layers in image editing software. Render parts of your scene, like figures and environments, in Iray (or 3delight) where you can use your DAZ assets quickly and effectively. Then export that scene to Blender to use as a reference mesh for placing effects that you may not be able to do as easily in DAZ Studio. Import your D|S renders and use Blender's compositor to layer in the effects you've made in Cycles. It's still a lot of work, but not as much as modifying a hundred shaders/materials from figures, hair, clothing, accessories, environment, etc., for a single render that gets done all in one pass.
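Per pixel, that layering is the standard "over" operator, which is what Blender's Alpha Over node computes. A minimal sketch with premultiplied RGBA tuples (just the math, not the node itself):

```python
def alpha_over(fg, bg):
    """Porter-Duff 'over': composite a premultiplied RGBA foreground
    pixel onto a background pixel. Where the foreground is opaque it
    wins; where it is transparent the background shows through."""
    r1, g1, b1, a1 = fg  # premultiplied foreground (e.g. Cycles effects pass)
    r2, g2, b2, a2 = bg  # background (e.g. imported D|S render)
    inv = 1.0 - a1
    return (r1 + r2 * inv, g1 + g2 * inv, b1 + b2 * inv, a1 + a2 * inv)
```

Stacking several of these, one per render layer, is exactly the figures-plus-environment-plus-effects workflow described above.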