Post processing fragment shader: draw shape in world space?

Hi,

I have little experience with shaders, but I’m trying to set up a post-processing shader.
For that, I’m simply calling Graphics.Blit(…) in the Camera’s OnRenderImage(…) callback. No problem there.

(I have edited the post from here on to rewrite the question to be a bit more specific)

Right now I am simply trying to draw a circle at a position in the world (hard-coded at (0, 0, 2.5)).
Here’s my shader so far:

Properties
	{
		_MainTex("", 2D) = "white" {}
	}
	SubShader
	{
		ZTest Always Cull Off ZWrite Off Fog{Mode Off}

		Pass
		{
			CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

			struct v2f
			{
				float4 pos : POSITION;
				half2 uv : TEXCOORD0;
			};
 
			v2f vert(appdata_img v)
			{
				v2f o;
				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
				o.uv  = MultiplyUV(UNITY_MATRIX_TEXTURE0, v.texcoord.xy);

				return o;
			}

			sampler2D _MainTex;

			fixed4 frag(v2f i) : COLOR
			{
				fixed4 orgCol = tex2D(_MainTex, i.uv);

				fixed4 col = orgCol;

				// Circle at (0, 0, 2.5) in the world
				float3 camToCenter = float3(0.0f, 0.0f, 2.5f) - _WorldSpaceCameraPos;
				float3 camToCenterN = normalize(camToCenter);

				// Camera base vectors
				float3 cameraRightW = mul((float3x3)unity_CameraToWorld, float3(1, 0, 0));
				float3 cameraUpW    = mul((float3x3)unity_CameraToWorld, float3(0, 1, 0));
				float3 cameraFwdW   = mul((float3x3)unity_CameraToWorld, float3(0, 0, 1));

				// Screen size
				float scrW = _ScreenParams.x;
				float scrH = _ScreenParams.y;

				// Ray through current pixel
				// w = width of screen in world space
				// h = height of screen in world space
				// w = 2 * tan(0.5 * fovHor), assuming screen is 1 unit in front of camera (in world space)
				// fovHor = 2 * atan(1 / unity_CameraProjection._m11)
				// => w = 2 / unity_CameraProjection._m11
				float w = 2.0f / unity_CameraProjection._m11;
				float h = w * scrH / scrW;
				float3 ray = cameraFwdW + (i.uv.x - 0.5f) * w * cameraRightW
									    + (i.uv.y - 0.5f) * h * cameraUpW;
				ray = normalize(ray);

				// Make circle of radius 1 (in world space)
				float alpha = acos(dot(ray, camToCenterN));
				int makeRed = (abs(length(camToCenter) * sin(alpha)) < 1.0f);
				col.r =  makeRed;
				col.g = (makeRed ? 0.0f : col.g);
				col.b = (makeRed ? 0.0f : col.b);

				// PROBLEM: resulting circle is:
				//   -> Not of radius 1 (appears to be 1.5)
				//   -> Not at (0, 0, 2.5) when not looking exactly at (0, 0, 2.5)

				return col;
			}
			ENDCG
		}
	}
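
The per-pixel ray construction in the fragment shader above can be checked numerically outside the shader. Here is a quick sanity check in plain Python (the camera basis vectors and the screen extents w and h are made-up values, not the ones the shader computes):

```python
import math

# Camera basis vectors in world space (identity camera, as an example)
right, up, fwd = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

# World-space extents of a screen 1 unit in front of the camera (made up)
w, h = 2.0, 1.125

def pixel_ray(u, v):
    # ray = fwd + (u - 0.5) * w * right + (v - 0.5) * h * up, normalized
    ray = [f + (u - 0.5) * w * r + (v - 0.5) * h * p
           for f, r, p in zip(fwd, right, up)]
    n = math.sqrt(sum(c * c for c in ray))
    return [c / n for c in ray]

print(pixel_ray(0.5, 0.5))   # center pixel looks straight ahead: [0, 0, 1]

# The ray through the right screen edge spans half the horizontal FOV;
# with w = 2 the half-width is 1, so the angle is atan(1) = 45 degrees.
edge = pixel_ray(1.0, 0.5)
print(math.degrees(math.atan2(edge[0], edge[2])))
```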

The problem I am having:
This is supposed to draw a red circle of radius 1 at (0, 0, 2.5). When I look at (0, 0, 2.5), there is a red circle there, and I can move the camera around and it still works.

But when I am not looking exactly at (0, 0, 2.5), the circle drifts away from that point as the camera rotates.
Moreover, the radius of the circle isn’t actually 1: when I add a unit sphere at (0, 0, 2.5), the sphere comes out smaller than the circle, and I have to give it a radius of 1.5 for the two to match.

I noticed that a bunch of things aren’t quite as easy as in regular shaders, since this shader is applied as a post process.

Your question is a bit confusing. A fragment is a pixel unit in 2D screen space, which means the fragment position is already projected into 2D. Inside that projected 2D space, all rays from the camera are parallel and equal to (0, 0, 1).

If you are asking about the worldspace position of the fragment before projection, you usually just calculate it in the vertex shader and pass it as a separate value to the fragment shader. See the code in this question for reference: i.wpos inside the fragment shader will be the worldspace position of that fragment. To get the worldspace direction from your camera to that point, you just subtract the worldspace position of the camera.

PS: Questions like this focus on a specific solution without mentioning the underlying problem. It’s usually better to at least say what you actually want to achieve; in a lot of cases there are much easier solutions. However, since we don’t know what you want to do with that “ray”, we can’t suggest anything else.
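
The approach described above can be sketched numerically in plain Python rather than Cg (the object-to-world matrix and the camera position below are made-up values for illustration):

```python
import math

def mat_vec(m, v):
    # 4x4 matrix times a 4-component column vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Hypothetical object-to-world matrix: the object sits at (0, 0, 2.5).
object_to_world = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 2.5],
    [0.0, 0.0, 0.0, 1.0],
]

vertex  = [0.0, 0.0, 0.0, 1.0]   # object-space vertex position
cam_pos = [0.0, 1.0, 0.0]        # worldspace camera position (made up)

# What the vertex shader would compute and pass along as i.wpos:
wpos = mat_vec(object_to_world, vertex)[:3]   # -> [0.0, 0.0, 2.5]

# Worldspace direction from the camera to that fragment:
ray = [p - c for p, c in zip(wpos, cam_pos)]
length = math.sqrt(sum(c * c for c in ray))
ray = [c / length for c in ray]   # normalize per fragment
```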

Okay, I figured it out. Taking a break often helps :).
The problem was in the logic.
Two errors…

  1. The FOV obtained from the projection matrix is the vertical FOV, not the horizontal one.

    // w = 2 * tan(0.5 * fovHor), assuming screen is 1 unit in front of camera (in world space)
    // fovHor = 2 * atan(1 / unity_CameraProjection._m11)
    // => w = 2 / unity_CameraProjection._m11
    float w = 2.0f / unity_CameraProjection._m11;
    float h = w * scrH / scrW;
    Should have been:

    // h = 2 * tan(0.5 * fovVert), assuming screen is 1 unit in front of camera (in world space)
    // fovVert = 2 * atan(1 / unity_CameraProjection._m11)
    // => h = 2 / unity_CameraProjection._m11
    float h = 2.0f / unity_CameraProjection._m11;
    float w = h * scrW / scrH;
    This makes the circle stay at the correct position.

  2. Calculating the ray-sphere “intersection” was not quite right.

    float alpha = acos(dot(ray, camToCenterN));
    int makeRed = (abs(length(camToCenter) * sin(alpha)) < 1.0f);
    Was not correct. Instead this does work:

    float3 q = camToCenter - ray * dot(camToCenter, ray);
    int makeRed = (length(q) < 1.0f);
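
Both fixes can be sanity-checked with a few lines of plain Python outside the shader. The 60° vertical FOV, the screen size, and the ray directions below are made-up numbers for the check, and q is the rejection of camToCenter from the unit ray, so length(q) is the perpendicular distance from the sphere’s center to the view ray:

```python
import math

# --- Fix 1: the projection matrix encodes the *vertical* FOV ---
fov_vert = math.radians(60.0)    # assumed vertical field of view
scr_w, scr_h = 1920.0, 1080.0    # assumed screen resolution

# m11 of a perspective projection matrix is 1 / tan(0.5 * fovVert)
m11 = 1.0 / math.tan(0.5 * fov_vert)

h = 2.0 / m11            # height of a screen 1 unit in front of the camera
w = h * scr_w / scr_h    # width follows from the aspect ratio
print(h)                 # 2 * tan(30 deg) ~ 1.1547

# --- Fix 2: length(q) is the perpendicular distance from center to ray ---
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dist_to_ray(center, ray):
    # ray must be unit length; q = center - ray * dot(center, ray)
    d = dot(center, ray)
    q = [c - r * d for c, r in zip(center, ray)]
    return math.sqrt(dot(q, q))

cam_to_center = (0.0, 0.0, 2.5)   # camera at the origin, as an example

# Looking straight at the center: distance 0 < 1 -> pixel turns red
print(dist_to_ray(cam_to_center, (0.0, 0.0, 1.0)))   # 0.0

# Looking perpendicular to the center direction: distance 2.5 -> no red
print(dist_to_ray(cam_to_center, (1.0, 0.0, 0.0)))   # 2.5
```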