How do int textures work in ComputeShaders?

I’m having trouble understanding how to write into a RenderTexture with an integer format (e.g. RGInt). The following code produces a completely black texture, but by my understanding it should be yellow:

C# Script:

using UnityEngine;

public class IntTextureTest : MonoBehaviour {
	public RenderTexture renderTexture;
	public ComputeShader computeShader;

	void Start() {
		renderTexture = new RenderTexture(1024, 1024, 0, RenderTextureFormat.RGInt);
		renderTexture.enableRandomWrite = true;
		renderTexture.Create();

		computeShader.SetTexture(0, "Result", renderTexture);
		computeShader.Dispatch(0, renderTexture.width / 8, renderTexture.height / 8, 1);
	}
}

ComputeShader:

#pragma kernel CSMain
RWTexture2D<int2> Result;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID) {
    Result[id.xy] = int2(0x7fffffff, 0x7fffffff);
}

I have verified that the texture format is supported using SystemInfo.SupportsRenderTextureFormat, and I tried the same example with a float texture, which worked fine.

I don’t know the full answer, but RWTexture2D<float2> works just fine. My guess is that RGInt describes how the hardware stores the data, not the type you read and write on the shader side.

#pragma kernel CSMain
RWTexture2D<float2> Result;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    Result[id.xy] = float2(id.x / 1024.0, id.y / 1024.0);
}

I’m still hoping for better solutions, but this is what I’ve come up with so far:

Since it apparently isn’t possible to write ints or uints into a texture directly, I decided to use a float texture instead. Floats can represent integers up to 16777216 (2^24) exactly, which isn’t much but may be enough for some use cases.

You can do much better, though, by using a different kind of conversion between floats and ints: HLSL provides intrinsic functions (asfloat, asuint, asint) that reinterpret a bit pattern as a float or as an int/uint. This should be cheap or even completely free. If you only work with compute shaders and convert just before writing and just after reading, this works with any int/uint.

However, there is a small but annoying catch: if you perform any float operation on a denormalized float in HLSL, that float is flushed to zero, and this also seems to apply to sampling a texture with tex2D and friends. If you (like me) want to write a texture in a compute shader and then read it in a regular vertex/fragment shader, you have to make sure you only store uints whose bit patterns correspond to normalized floats. You can do that by setting the second-highest bit of the uint to 1. In practice this means you can’t easily use the upper two bits, but it still lets you store uints up to 1073741823, which is significantly better than the 16777216 you get from plain floats.

Here are the conversion functions (as macros so that they work with arguments of any dimension):

#define EncodeUintToFloat(u) (asfloat(u | 0x40000000))
#define DecodeFloatToUint(f) (asuint(f) & 0xBFFFFFFF)
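
To make the round trip concrete, here’s a rough sketch (the texture names, the RFloat format and the 1024 stride are just placeholders, not taken from the code above): the compute shader encodes right before writing into a float-format texture created with enableRandomWrite, and the consuming fragment shader decodes right after sampling.

// Compute shader side: encode just before writing into a float texture
// (e.g. RenderTextureFormat.RFloat with enableRandomWrite = true).

#define EncodeUintToFloat(u) (asfloat(u | 0x40000000))
#define DecodeFloatToUint(f) (asuint(f) & 0xBFFFFFFF)

#pragma kernel CSMain
RWTexture2D<float> Result;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    uint value = id.y * 1024 + id.x;   // example payload; must fit in the lower 30 bits
    Result[id.xy] = EncodeUintToFloat(value);
}

// Fragment shader side: decode just after sampling. The texture should use
// point filtering, since bilinear filtering would blend encoded bit patterns
// into garbage.

Texture2D<float> _EncodedTex;
SamplerState sampler_EncodedTex;   // Unity's "sampler" + texture name convention

uint LoadEncodedValue(float2 uv)
{
    float f = _EncodedTex.SampleLevel(sampler_EncodedTex, uv, 0);
    return DecodeFloatToUint(f);
}

(On the C# side that means setting filterMode = FilterMode.Point on the RenderTexture, in addition to enableRandomWrite.)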

It’s an ugly solution but seems to work. If you have a better one, please share.