What is the difference between "0.2f" and "0.2"?

I’ve seen this in a lot of tutorials. I mean, what is it actually? And when is the right time to use it?

The real reason is a throwback to the ’60s, when we had to be obsessed with how much space everything took. Many “types” were just alternate ways to specify an integer or a floating point number, using only as much space as we needed.

For example, regular integers use four bytes, going from about -2 billion to +2 billion. Characters are stored in only 1 byte, holding 0-255 (for an unsigned one). So, in the old days, if you had a number going from, say, 0 to 150, you wrote unsigned char n; and saved three bytes. You were abusing a 0-255 character slot to store a number, like selling mini-donuts in an egg carton.

If you knew a number would only be +/- 32 thousand or so, you used short int n;. That told the compiler to store n in only 2 bytes. Or if you knew it would only ever be positive, 0 to about 65K, you used unsigned short int n;. Unsigned means the negative numbers were repurposed as positives, so it could count twice as high.
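
If it helps to put modern C# names on those old tricks, here’s a quick sketch (just an illustration, not part of the original question) that prints the size and range of the space-saving integer types — byte is C#’s 1-byte unsigned type, the equivalent of the old unsigned char move:

```csharp
using System;

class IntegerSizes
{
    static void Main()
    {
        // byte is C#'s 1-byte unsigned type (the old "unsigned char" trick)
        Console.WriteLine($"byte:   {sizeof(byte)} byte,  {byte.MinValue} to {byte.MaxValue}");
        // short/ushort are the 2-byte types
        Console.WriteLine($"short:  {sizeof(short)} bytes, {short.MinValue} to {short.MaxValue}");
        Console.WriteLine($"ushort: {sizeof(ushort)} bytes, {ushort.MinValue} to {ushort.MaxValue}");
        // int is the regular 4-byte integer
        Console.WriteLine($"int:    {sizeof(int)} bytes, {int.MinValue} to {int.MaxValue}");
    }
}
```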

Originally, floating-point (decimal place) numbers were stored in 4 bytes, giving 7ish significant digits. In other words, 1/3 was 0.3333333, and then trailed off as either zeros or junk digits. This was considered pretty accurate. We called them floats: float n;. But, just in case, a double-precision floating point type was made, using 8 bytes and 15ish significant digits.
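
For concreteness, here’s a tiny C# sketch (purely illustrative) that computes 1/3 at both precisions — float is 4 bytes with about 7 good digits, double is 8 bytes with about 15-16:

```csharp
using System;

class Precision
{
    static void Main()
    {
        float  f = 1.0f / 3.0f;  // 4 bytes, about 7 significant digits
        double d = 1.0  / 3.0;   // 8 bytes, about 15-16 significant digits

        // Anything past the significant digits is just rounding noise
        Console.WriteLine(f.ToString("G9"));   // roughly 0.333333343
        Console.WriteLine(d.ToString("G17"));  // roughly 0.33333333333333331
    }
}
```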

When memory suddenly became a thousand times cheaper, all those tricks became mostly pointless. Everyone was taught to use double n; to get the most accurate results. And float n; became a thing of the past. By the time C# came along, they decided that a regular written-out number should count as type double.

In other words, 1.0 is really eight bytes: 1.000000000000000, and 1.0/3.0 is eight bytes: 0.333333333333333.

They decided that if, for some insane reason, you cared about saving 4 bytes and wanted a float, you could put dot-f at the end. So 1.0f is really four bytes holding 1.000000. And 1.0f/3.0f is merely 0.3333333.
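
You can see this in C# itself (again, just an illustration): the plain literal really is a double and the f-suffixed one really is a float, and the runtime will tell you so:

```csharp
using System;

class LiteralTypes
{
    static void Main()
    {
        double d = 1.0;   // plain literal: a double, 8 bytes
        float  f = 1.0f;  // 'f' suffix: a float, 4 bytes

        Console.WriteLine(d.GetType());   // System.Double
        Console.WriteLine(f.GetType());   // System.Single (C#'s internal name for float)
    }
}
```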

Unity comes along, with C# for games. In games, getting a decent frame rate is much more important than accuracy, and less memory used means a little more speed. Plus memory is important in cell phones. So Unity breaks out the old, obscure float to store everything, with that dot-f on the end which we thought no one would ever need anymore.

So, here’s the error: float n; n = 1.0;. Now, that 1.0 is really an eight-byte 1.000000000000000. Since n is a float, it can only hold 1.000000. The computer would have to chop off the extra digits to make it fit. You know they are only zeros, so it won’t matter, but the computer doesn’t. It could be chopping off 4’s or 9’s, so it gives you all those “can’t put double in float” errors.
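
Here’s what that looks like in practice (the error text in the comment is paraphrased from memory; the exact wording depends on your compiler version):

```csharp
class FloatAssignment
{
    static void Main()
    {
        // float a = 1.0;     // error: literal of type double cannot be implicitly converted to 'float'
        float b = 1.0f;       // fine: the 'f' makes the literal a float to begin with
        float c = (float)1.0; // also fine: the cast tells the compiler the chop is intentional
        System.Console.WriteLine(b + c);
    }
}
```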

The ‘f’ makes the number a float rather than a double. Unity uses floats. Numbers like 0.3 in C# are doubles, so the ‘f’ is necessary. The ‘f’ is also supported in Unity’s JavaScript (UnityScript), but it’s unnecessary there since UnityScript defaults to floats.
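
A small Unity-flavored example of where this bites (the Mover class is made up for illustration; Transform.Translate and Time.deltaTime are real Unity APIs that work in floats):

```csharp
using UnityEngine;

// Hypothetical component, just to show where the 'f' matters in Unity code
public class Mover : MonoBehaviour
{
    void Update()
    {
        // Unity's API works in floats, so the literals need the 'f' suffix
        transform.Translate(0.2f * Time.deltaTime, 0f, 0f);

        // transform.Translate(0.2 * Time.deltaTime, 0, 0);  // won't compile: 0.2 makes the whole expression a double
    }
}
```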

If you’re talking in the context of C# and passing a parameter to a method/function, 0.2 is treated as a double literal at compile time, whilst the f on 0.2f instructs the compiler to treat the number as a float.
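
For example (SetSpeed here is a made-up method, standing in for any API that takes a float parameter):

```csharp
using System;

class Program
{
    // Hypothetical method with a float parameter, like most Unity methods
    static void SetSpeed(float speed) => Console.WriteLine(speed);

    static void Main()
    {
        SetSpeed(0.2f);       // fine: the literal is already a float
        // SetSpeed(0.2);     // won't compile: 0.2 is a double, and there is no implicit double-to-float conversion
        SetSpeed((float)0.2); // works too, but the 'f' suffix is the usual fix
    }
}
```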