Differences in the depth buffer inside and outside the editor

Hello!

I’m working on a VR project (Gappo’s Legacy) and so far everything was working pretty well, but recently we discovered that our raymarched explosions fail to render properly on an MSI GE62VR. Even worse: they only fail in a non-development build (in the editor everything works great).

(On non-laptop computers with a GTX 1060 and a GTX 1080 everything works fine.)

After some tests, we have determined that the error is in the depth check we perform so that opaque objects occlude the raymarched explosions. Basically, in the GE62VR build the explosion is only visible when it happens really, really close to the camera.

I’ve read a lot about depth buffer formats and clip-space Z values differing per platform, and I’m considering writing my own distance check, but since everything works fine in the editor, I believe that locating the difference between the two cases will be simpler.

But so far I haven’t had any luck finding out what the difference is, or how to reproduce the build’s behaviour in the editor.

Also, for the depth check we are using the LinearEyeDepth() function which, according to the manual, already handles the platform differences.

// Using this to get the Scene Depth
float sceneZ = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(v2fIn.projPos)).r);    

// Using this comparison to stop raymarching
if (distanceToCameraAtWorldSpace > sceneZ)
{
	break;
}
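
For reference, UnityCG.cginc defines these helpers roughly like this (copied from the include file of a recent Unity version; _ZBufferParams holds the platform-specific projection constants), which is why I expected them to be safe across platforms:

// From UnityCG.cginc: both helpers decode a raw depth sample using
// _ZBufferParams, which Unity fills in per platform (reversed Z included).
inline float Linear01Depth( float z )
{
	return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
}

inline float LinearEyeDepth( float z )
{
	return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
}

One caveat I’m aware of: LinearEyeDepth() returns view-space Z, so comparing it against a world-space Euclidean distance is only an approximation towards the screen edges. But that alone wouldn’t explain an editor/build difference.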

Any idea what could be causing this different behaviour inside and outside the editor? Thanks!


Edit:

I’ve run some tests and have ruled out the reversed Z buffer and the OpenGL Z range as the origin of the problem. So far, it looks like the error has something to do with improper Z values when running the build.
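
In case it’s useful, the kind of check I mean for ruling out reversed Z looks something like this (a rough sketch reusing the projPos setup from above, not the exact shader we ran):

// Rough sketch: visualize the raw depth sample as grayscale.
// D3D-like platforms use a reversed Z buffer, so flip the value there
// to get a consistent 0 = near, 1 = far on every API.
float rawZ = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(v2fIn.projPos)));
#if defined(UNITY_REVERSED_Z)
	rawZ = 1.0 - rawZ;
#endif
return float4(rawZ, rawZ, rawZ, 1);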

I prepared this shader:

// Scale the 0..1 depth by the far plane (_ProjectionParams.z) and
// offset it by the near plane (_ProjectionParams.y), then band by distance.
float sceneZ = Linear01Depth(UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(v2fIn.projPos)).r));
sceneZ *= _ProjectionParams.z;
sceneZ += _ProjectionParams.y;

if (sceneZ < 1)
{
	return float4(1, 0, 0, 1);
}
if (sceneZ < 2)
{
	return float4(0, 1, 0, 1);
}
if (sceneZ < 3)
{
	return float4(0, 0, 1, 1);
}
if (sceneZ < 4)
{
	return float4(1, 1, 0, 1);
}
if (sceneZ < 5)
{
	return float4(1, 0, 1, 1);
}
if (sceneZ < 6)
{
	return float4(0, 1, 1, 1);
}
return float4(1, 1, 1, 1);

And the result in the editor was this:

The result in the build was this:

So, clearly, in the build version the values returned by Linear01Depth() are different, and in fact all under 1.

Am I doing something wrong? My next test will be checking for smaller values.


Edit 2:

Ok, after some work, I’ve nailed down the problem. Looks like Linear01Depth() is returning the same value all the time, somewhere between 0.565 and 0.57.

I found this through this shader:

	float sceneZ = Linear01Depth(UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(v2fIn.projPos)).r));
	sceneZ *= _ProjectionParams.z;
	sceneZ += _ProjectionParams.y;

	if (sceneZ < 0.56)
	{
		return float4(1, 0, 0, 1);
	}
	if (sceneZ < 0.565)
	{
		return float4(0, 1, 0, 1);
	}
	if (sceneZ < 0.570)
	{
		return float4(0, 0, 1, 1);
	}
	if (sceneZ < 0.575)
	{
		return float4(1, 1, 0, 1);
	}
	if (sceneZ < 0.580)
	{
		return float4(1, 0, 1, 1);
	}
	if (sceneZ < 0.60)
	{
		return float4(0, 1, 1, 1);
	}
	return float4(1, 1, 1, 1);

This yields a green square, meaning every pixel falls in the same narrow band. What could be happening here?

Ooooook, looks like the default quality setting (Fastest) won’t generate a proper depth texture on the GE62VR, although it will on our other, non-laptop computers. That would explain the constant value: the shader was sampling a depth texture that was never actually rendered.
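
In case it helps anyone else, I believe the robust fix is to request the depth texture explicitly from a script instead of relying on the quality level’s shadow settings to create it. A minimal sketch (hypothetical component name):

// Hypothetical helper: force the camera to render a depth texture
// even on quality levels where shadows (and thus depth) are disabled.
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class ForceDepthTexture : MonoBehaviour
{
    void Start()
    {
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }
}

With that in place, the raymarched explosions should be occluded correctly regardless of the quality level.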

Solved.