Rendering two cameras to one target texture

Hey there.

I’m currently rendering a camera to a texture, which I then put on a mesh. I don’t want a post effect applied to certain objects in the scene, and the usual way to handle that is with two cameras - one to render most of the scene and one to render the unaffected objects.

The problem is that I still need to put the result on the mesh… can I make the two cameras share the same render texture? Is there any way to do this?
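For anyone else reading: one way to sketch this is to assign the same RenderTexture to both cameras’ Target Texture and use the Depth property to control render order. The script, camera, and layer names below are illustrative, not from an official setup:

```csharp
using UnityEngine;

// Hypothetical setup script - assumes two cameras and a RenderTexture
// asset already exist and are assigned in the Inspector.
public class SharedRenderTextureSetup : MonoBehaviour
{
    public Camera mainCamera;      // renders most of the scene (post effect applied)
    public Camera overlayCamera;   // renders the objects that should skip the effect
    public RenderTexture sharedTexture;

    void Start()
    {
        // Both cameras write into the same render texture.
        mainCamera.targetTexture = sharedTexture;
        overlayCamera.targetTexture = sharedTexture;

        // Higher depth renders later, so the overlay camera draws second.
        mainCamera.depth = 0;
        overlayCamera.depth = 1;

        // Layers + culling masks so each camera only draws its own objects.
        // "NoPostFX" is an assumed layer name you would create yourself.
        int noPostFx = 1 << LayerMask.NameToLayer("NoPostFX");
        mainCamera.cullingMask = ~noPostFx;
        overlayCamera.cullingMask = noPostFx;
    }
}
```

The same settings can of course be made directly in the Inspector (Target Texture, Depth, Culling Mask) without any script.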

EDIT - I got multiple cameras rendering to a shared render texture, but now I’m having an issue with depth. The second camera has its Clear Flags set to “Depth Only”, in case that matters. The problem is that the unaffected objects are now drawn on top of everything, which is obviously not what I want. It really seems like this should be easier.
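A likely explanation: “Depth Only” clears the depth buffer before the second camera renders, so its objects win every depth test and draw over the whole scene. Setting the second camera’s Clear Flags to “Don’t Clear” instead keeps both the color and depth from the first camera, so scene geometry can occlude the overlay objects normally. A minimal sketch, assuming this script sits on the second camera:

```csharp
using UnityEngine;

// Hypothetical fix for the depth problem: "Don't Clear" preserves the
// first camera's depth buffer, so the overlay objects are depth-tested
// against the rest of the scene instead of being drawn on top of it.
[RequireComponent(typeof(Camera))]
public class OverlayCameraDepthFix : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.Nothing; // instead of Depth Only
    }
}
```

Equivalently, just change Clear Flags from “Depth Only” to “Don’t Clear” on the camera in the Inspector.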

I might just give them some insane color and then have my shader turn that color into something else, or something… hm.

Hi @demonpants,
6 years late… did you find a good solution for this?
I’ve come across the same issue 🙂
Many thanks!