Why do those two calculations return different values?

Hi,

I want to get the first three digits after the decimal point of a float value, as an integer.
Example:
float 61.001 => 1

I have done this (using C#):

float value = 61.001f;
float temp = value * 1000;
int retVal = ((int)temp) % 1000;

retVal has value 1 → This is correct

but when I'm not using the temp var:

float value = 61.001f;
int retVal = ((int)(value * 1000)) % 1000;

retVal has value 0, which is not correct.

I really wonder why these two calculations return different values.
In the three or four tests I've done, the second implementation seems to return 1 less than the first (e.g. 58 instead of 59).

This can't be caused by float rounding, can it?
Is there an optimization that causes this difference?

Thanks!

If you just want to print the float rounded to 3 decimal places, you can call value.ToString("0.000");
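
For example, something like this prints it with exactly three decimals:

float value = 61.001f;
Console.WriteLine(value.ToString("0.000")); // prints "61.001"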

I turned retVal into a float and took out the (int) cast, and I get the value

0.9994507

Without the %1000 it returns 61001, spot on.
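
In other words, a sketch of what I ran (the exact output can depend on the runtime's intermediate precision; these are the values I saw):

float value = 61.001f;
float retVal = (value * 1000) % 1000;
Console.WriteLine(retVal);        // 0.9994507 – the tiny deficit survives
Console.WriteLine(value * 1000);  // 61001 – spot on once narrowed to a float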

The error itself is in the (non-)truncation of the tiny error left over from value*1000. When it's all done in one expression, the tiny fractional deficit seems to be retained. When you separate it out with the temp variable, the stored value is 61001 on the dot, so it returns 1 as expected.
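
Here's a sketch of the two paths side by side, annotated with the values reported in this thread (what you actually get can depend on how the JIT keeps intermediates):

float value = 61.001f;

// All in one expression: the product can be kept at higher precision,
// so the cast sees 61000.999... and truncates it to 61000.
int direct = ((int)(value * 1000)) % 1000;   // 0 here

// Stored to a float first: the store rounds the product to float
// precision, which snaps it to exactly 61001.
float temp = value * 1000;
int viaTemp = ((int)temp) % 1000;            // 1 here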

So, yes, float error is the culprit, but the int cast is what produces the 0: even a float of 0.999999 becomes 0 as an int, because the cast simply truncates (toward zero).
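
For example:

Console.WriteLine((int)0.999999f);  // 0 – the cast drops the fraction
Console.WriteLine((int)1.000001f);  // 1
Console.WriteLine((int)-1.9f);      // -1 – truncation is toward zero, not a floor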

It's about double precision and rounding to floats. Here is the answer:
http://stackoverflow.com/questions/8911440/strange-behavior-when-casting-a-float-to-int-in-c-sharp
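
If it helps, here's a quick way to see the actual stored value (a sketch; the exact digits printed assume standard IEEE 754 single precision):

float value = 61.001f;

// 61.001 has no exact binary representation; the nearest float is slightly low:
Console.WriteLine(((double)value).ToString("R"));   // ~61.00099945068359

// Multiplying in double precision keeps that deficit below 61001:
Console.WriteLine((double)value * 1000);            // ~61000.9994506836

// Narrowing the product back to a float snaps it to exactly 61001:
Console.WriteLine((float)((double)value * 1000));   // 61001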