… it doesn’t work; the result is not the same. I am unable to make it work via this RenderTexture + SetGlobalTexture approach.
What I’ve tried so far:
Trying various RenderTexture formats
Assigning the camera’s RenderTexture directly to the shaders that use it. This also behaved strangely, which is why my guess is that my RenderTexture has different contents from _CameraDepthNormalsTexture.
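For reference, a minimal sketch of the RenderTexture + SetGlobalTexture approach I’ve been attempting (the camera reference and the `_MyDepthNormalsTexture` property name are just placeholders from my setup, and the format shown is only one of several I tried):

```csharp
using UnityEngine;

// Attempt: render camera A into a RenderTexture and expose it globally,
// hoping shaders could sample it in place of _CameraDepthNormalsTexture.
// "_MyDepthNormalsTexture" is a placeholder property name.
public class DepthNormalsCapture : MonoBehaviour
{
    public Camera depthCamera; // camera A, with its specific culling mask
    private RenderTexture rt;

    void OnEnable()
    {
        // I tried various formats here; none seemed to match the contents
        // of _CameraDepthNormalsTexture.
        rt = new RenderTexture(Screen.width, Screen.height, 24,
                               RenderTextureFormat.ARGB32);
        depthCamera.targetTexture = rt;
        Shader.SetGlobalTexture("_MyDepthNormalsTexture", rt);
    }

    void OnDisable()
    {
        depthCamera.targetTexture = null;
        rt.Release();
    }
}
```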
When the camera renders to a texture, it renders what it can see (based on its culling mask), NOT the depth normals. You access the depth and normal data from within a shader using the built-in _CameraDepthNormalsTexture variable (which gives access to that data on the GPU side).
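Also worth noting: in the built-in render pipeline, _CameraDepthNormalsTexture is only populated if the camera is told to generate it, along the lines of:

```csharp
using UnityEngine;

// _CameraDepthNormalsTexture only has valid contents when the camera's
// depthTextureMode includes DepthNormals; otherwise shaders sampling it
// read undefined data.
[RequireComponent(typeof(Camera))]
public class EnableDepthNormals : MonoBehaviour
{
    void OnEnable()
    {
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.DepthNormals;
    }
}
```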
The only time that I’ve seen this shader variable being accessed outside the GPU is to pass it into a compute shader. If you are using compute shaders, you have to pass it in with `ComputeShader.SetTextureFromGlobal(csMainKernelID, "_DepthNormalsTexture", "_CameraDepthNormalsTexture");`
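A rough sketch of that binding, assuming a compute shader asset with a kernel named `CSMain` and a `_DepthNormalsTexture` slot (both names are illustrative):

```csharp
using UnityEngine;

// Sketch: bind the global _CameraDepthNormalsTexture into a compute
// shader's texture slot. The kernel name "CSMain" and the slot name
// "_DepthNormalsTexture" are illustrative.
public class BindDepthNormalsToCompute : MonoBehaviour
{
    public ComputeShader compute;

    void Start()
    {
        int kernelID = compute.FindKernel("CSMain");
        // Copies the global texture binding into the compute shader's slot.
        compute.SetTextureFromGlobal(kernelID, "_DepthNormalsTexture",
                                     "_CameraDepthNormalsTexture");
    }
}
```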
I am not using a compute shader. Instead, in my forward rendering setup, I have a surface shader which accesses the _CameraDepthNormalsTexture of a camera A with depth -1. There is also a camera B with depth 0, but in this setup the surface shader ends up using camera A’s _CameraDepthNormalsTexture.
Now if I switch the forward rendering setup to a deferred one, the surface shader doesn’t work out of the box anymore. This didn’t come as a surprise to me, and I thought the easiest way to circumvent it would be to simply draw the camera’s rendered depth and normal data (GBuffer) into a texture and then reference that texture in my surface shader. After all, if I can do that with an albedo texture, why not with the depth-normals texture?
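A sketch of what I mean, assuming the built-in deferred path (where, as far as I understand, world-space normals end up in GBuffer2); the `_MyGBufferNormals` property name is a placeholder, and I haven’t gotten this to behave like _CameraDepthNormalsTexture:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: copy the deferred normals target (GBuffer2 in the built-in
// deferred pipeline) into our own texture right after the GBuffer pass,
// then expose it globally for the surface shader to sample.
public class CopyGBufferNormals : MonoBehaviour
{
    public Camera deferredCamera; // camera A
    private RenderTexture rt;
    private CommandBuffer cb;

    void OnEnable()
    {
        // GBuffer2 uses a 10-bit-per-channel format in the built-in pipeline.
        rt = new RenderTexture(Screen.width, Screen.height, 0,
                               RenderTextureFormat.ARGB2101010);
        cb = new CommandBuffer { name = "Copy GBuffer normals" };
        cb.Blit(BuiltinRenderTextureType.GBuffer2, rt);
        cb.SetGlobalTexture("_MyGBufferNormals", rt);
        deferredCamera.AddCommandBuffer(CameraEvent.AfterGBuffer, cb);
    }

    void OnDisable()
    {
        deferredCamera.RemoveCommandBuffer(CameraEvent.AfterGBuffer, cb);
        rt.Release();
    }
}
```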
Should I also use something like `MySurfaceShader.SetTextureFromGlobal(ssMainKernelID, "_DepthNormalsTexture", "_CameraDepthNormalsTexture");`, or do you have a suggestion for how I can make sure my surface shader has access to camera A’s GBuffer (which, by the way, does have a specific culling mask)?