Use Iray only for selected models

Hi there, is it possible to render only some objects in Iray? I want to render only the characters in Iray and leave the backgrounds in Texture Shaded mode, since my focus is more on the characters and I don't have enough hardware to render everything together. I want to do this without hiding the background via the eye icon, because I also want to see how the character will look in the scene.

 

Thanks

Comments

  • bruno2 Posts: 30

    I render using the CPU, by the way; that's why I need this.

    I have a GPU, but DAZ doesn't recognize it; it's an Intel UHD Graphics 620.

  • Granville Posts: 696

    You would do two renders. Hide the background objects/props and render the characters in Iray with the render dome turned off. Then hide the characters and render the background. Composite in Photoshop or GIMP.
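    For the "composite in Photoshop or GIMP" step, here is a minimal scripted alternative using Python and Pillow. It is only a sketch, and the file names are hypothetical: it assumes background.png (the background render) and characters.png (the character render saved as a PNG with a transparent background, which is typically what you get with the dome turned off and no backdrop), both from the same locked camera at the same resolution.

    ```python
    # Layer the character render over the background render with Pillow.
    from PIL import Image

    background = Image.open("background.png").convert("RGBA")
    characters = Image.open("characters.png").convert("RGBA")  # must keep its alpha channel

    # alpha_composite lays the characters over the background wherever their
    # alpha is non-zero, much like a normal layer stack in GIMP or Photoshop.
    combined = Image.alpha_composite(background, characters)
    combined.save("combined.png")
    ```

    Both images must be the same size, otherwise alpha_composite raises an error.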

  • Dave63 Posts: 49

    It will not help solve your specific problem, but you could use Render Settings / Advanced (tab) / Canvases to do what you are asking.

    It would be simpler to just hide the background, though, because using canvases will not preserve resources as well as that method.

  • alan bard newcomer Posts: 2,202

    Fixed camera ... the complete scene is a composite of the layers shown color coded. Plus, the cars, the background, and the rain effect are all separate.
    Not only can any part be rendered separately ... they are actually all scene subsets, so I don't need to open the 300 MB file.
    The big one used to take at least an hour to render ... the parts average 3 to 5 minutes apiece.
    Create your camera and then lock it. If you have a huge, complex scene you can then save pieces of it as subsets; just remember to save the camera with each one.

    [Attached images: 23 hb base3-2-40per2.jpg (2400 x 1200), 23 hb base3-alt.jpg (6000 x 3000)]
  • WendyLuvsCatz Posts: 38,237

    Environment tab > Background

    import an image

    or assemble PNG layers in a photo editor

  • Padone Posts: 3,700

    You are confusing two very different things: layering rendered images with a paint program, and compositing render layers with a compositor. The first will not preserve the interactions among elements, such as shadows and reflections; it also doesn't allow depth effects, since there's no z-buffer, and it doesn't allow retargeting lights or tone mapping, since it's not HDRI.

    That said, under some conditions layering rendered images can be a good approach, especially for comics.
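    To make the z-buffer point concrete, here is a rough numpy sketch (nothing Daz- or Iray-specific, and the arrays are hypothetical stand-ins for color and depth passes from the same camera): a compositor with per-layer depth can decide, per pixel, which layer is in front, which a flat layer stack in a paint program cannot do.

    ```python
    # Depth-aware merge of two layers: keep, per pixel, whichever layer is closer.
    import numpy as np

    def depth_merge(rgb_a, z_a, rgb_b, z_b):
        """rgb_* are HxWx3 color images, z_* are HxW depth maps."""
        front_is_a = (z_a <= z_b)[..., np.newaxis]  # HxWx1 mask, True where A is nearer
        return np.where(front_is_a, rgb_a, rgb_b)

    # Toy 1x2 image: layer A (red) is closer on the left pixel, layer B (blue) on the right.
    rgb_a = np.array([[[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)
    rgb_b = np.array([[[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
    z_a = np.array([[1.0, 5.0]])
    z_b = np.array([[2.0, 3.0]])
    print(depth_merge(rgb_a, z_a, rgb_b, z_b))  # left pixel red, right pixel blue
    ```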

  • bruno2 Posts: 30

    Thank you all for the tips =) I will take a look at everything here.

    With an NVIDIA card, is it possible to preview the Iray render about as fast as Texture Shaded mode is now? I mean, when I switch from Texture Shaded to Iray it takes about 10 seconds to show the scene, and another 10 seconds to go from one frame to the next, so it's impossible to get a useful preview.

    Is this preview faster with an NVIDIA card?

    Thanks!

  • Richard Haseltine Posts: 101,051

    That is the scene data being prepped and sent to Iray for rendering. I suppose a faster PCIe slot might help a little, but I suspect that the actual transfer of data is less important than the preparation stage, which is done in system memory.

  • Padone Posts: 3,700

    Yes, with an NVIDIA card the preview is faster, provided that the scene fits in the card's memory, which may not be the case with complex scenes. In most cases the Scene Optimizer is mandatory for working with the GPU, even with high-end cards. Consider that going from 2K to 4K textures requires 4x the VRAM, and going from 2K to 8K requires 16x, so a 6GB card that works fine with 2K textures would need 24GB for 4K and 96GB for 8K. With G8 and G9, most Daz assets use 4K or 8K textures, on top of HD geometry. That's absolutely ridiculous if you ask me; if you want to use the GPU, there's no optimization at all.

    https://www.daz3d.com/scene-optimizer
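    A back-of-the-envelope sketch of that 4x / 16x scaling, assuming uncompressed 8-bit RGBA textures and ignoring mipmaps, geometry, and everything else that competes for VRAM, so treat the absolute numbers as rough:

    ```python
    # Memory cost of a single square texture at 2K, 4K and 8K.
    def texture_megabytes(side_pixels, channels=4, bytes_per_channel=1):
        return side_pixels * side_pixels * channels * bytes_per_channel / 1024**2

    base = texture_megabytes(2048)
    for side in (2048, 4096, 8192):  # 2K, 4K, 8K
        mb = texture_megabytes(side)
        print(f"{side}px: {mb:.0f} MB per texture ({mb / base:.0f}x the 2K cost)")
    # 2048px: 16 MB per texture (1x the 2K cost)
    # 4096px: 64 MB per texture (4x the 2K cost)
    # 8192px: 256 MB per texture (16x the 2K cost)
    ```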

     

  • alan bard newcomer Posts: 2,202

    Padone said:

    You are confusing two very different things: layering rendered images with a paint program, and compositing render layers with a compositor. The first will not preserve the interactions among elements, such as shadows and reflections; it also doesn't allow depth effects, since there's no z-buffer, and it doesn't allow retargeting lights or tone mapping, since it's not HDRI.

    That said, under some conditions layering rendered images can be a good approach, especially for comics.

    The image below was taking an extremely long time to render, probably because the engine was checking all vectors between the light (behind the dragon) and the backdrop.
    The human eye can look at a real scene like this and see whether there is any red light on the background, because it's seeing light that has completed its travel (at the mere speed of 186,000 miles per second).
    The render engine doesn't have that option: it checks the pixels in the back against the light just to find out whether they interact. Same with the image for the eye reflection. I just took advantage of the fact that I can see there's no interaction between the back scene and the front image.
    According to theory, the human eye can discern a candle's flame at 1.6 miles under optimum conditions. Neither the eye nor Iray is going to see anything that light is interacting with.
    The second image is the same ... there is an interaction with the very bright red dragon outside the view, but there's no interaction with the background.
    ---
    Both versions are lit with only sky irradiance and the emissive dragon.
    ---
    As for the first image, the buildings and the train: each layer was set up because they don't interact with one another ... if they did, I would simply render those sub-scenes together in Daz.
     

    [Attached images: spring rise 11 base 2.jpg (4000 x 2000), ginger dragon eyes.jpg (1700 x 1018)]
  • Richard Haseltine Posts: 101,051

    alan bard newcomer said:

    The human eye can look at a real scene like this and see whether there is any red light on the background, because it's seeing light that has completed its travel (at the mere speed of 186,000 miles per second). The render engine doesn't have that option: it checks the pixels in the back against the light just to find out whether they interact. [...]

    The human eye doesn't do anything in that sense - it just receives all the photons that end up hitting it. Iray is simulating that, but it can't process every photon in a tiny fraction of a second, so it has to plod through tracing the paths taken by light rays, starting with varied start positions (for non-point sources) and angles, until it reaches a settled value for each pixel. If that value mostly relies on several bounces to get from source to whatever is in a given pixel, then it will take a long time for that pixel to settle.
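    A toy illustration of that "settled value" idea, with random.random() standing in for whatever one traced light path happens to contribute to a pixel (purely hypothetical, nothing Iray-specific): the pixel value is a running average that wanders less and less as samples accumulate, and pixels whose light arrives mainly through many bounces behave like a noisier sample source, so they need far more samples to settle.

    ```python
    # Per-pixel Monte Carlo convergence in miniature.
    import random

    def sample():
        # Stand-in for one traced light path's contribution; the true mean here is 0.5.
        return random.random()

    running_total = 0.0
    for n in range(1, 10001):
        running_total += sample()
        if n in (10, 100, 1000, 10000):
            print(f"{n:>5} samples: pixel estimate = {running_total / n:.3f}")
    # The estimate drifts toward 0.5 and fluctuates less as n grows.
    ```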
