int(0.99999999999999999) == 1. Why exactly is that?
In most cases, when you int a float (or in C when you (int) a double), the effect seems to be that the decimals are just 'cut off'.
Funnily, this is not always true: In the given example, the result will be 1.
What exactly is going on?
I have figured out that it happens just beyond the precision cutoff point. So a double with 16 nines as decimals will be 'floored'; add one more, and it will be 'ceiled'. For floats, the borderline is between 7 and 8 nines.
Can someone explain, and in a way that doesn't necessitate deeply understanding all the hardware intricacies of doubles and floats?
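For what it's worth, the 7-vs-8-nines borderline for single precision can be checked from Python by round-tripping through a 32-bit float with the stdlib struct module (a sketch; `to_f32` is a helper name I made up, since Python's own floats are doubles):

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest 32-bit float."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

print(to_f32(0.9999999))    # 7 nines: stays just below 1.0
print(to_f32(0.99999999))   # 8 nines: snaps to exactly 1.0
```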
From what I have read
0.99999999999999999 is a double precision number that is too precise to be represented exactly in 64 bits; the nearest representable value is 1.0, hence the value is rounded up.
Roughly: if the first digit beyond the representable precision is 5 or above, it gets rounded up to 1.0.
~ swim ~, I wonder how this 'too precise' is precisely determined.
Does a double know the places after the 16th? Wouldn't it have to know in order where to round to? Wouldn't it need precision for this in the first place?
I just tried what you said.
0.99999999999999995 becomes 1, but 0.99999999999999994 becomes 0.
If we add another digit, this pattern seems to repeat: 0.999999999999999945 becomes 1, while 0.999999999999999944 becomes 0. 🤔
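A quick check confirms the pattern, and shows the rounding happens in the literal itself, before int() is even called:

```python
# The halfway point between 1.0 and the largest double below it
# is 1 - 2**-54 ≈ 0.99999999999999994448...; literals above it
# parse to 1.0, literals below it parse to the double just under 1.
print(int(0.99999999999999995))    # 1
print(int(0.99999999999999994))    # 0
print(0.99999999999999995 == 1.0)  # True: rounded before int() ran
```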
blackwinter, now I'm not sure anymore, would the double specifics be easier to master? 😂
I've read about half of it, but somehow my brain can't get past the '0.9999 == 1 - wait what?' point yet.
I guess I'm a case of the 'education problem' they mention... 🤔
I agree with ~ swim ~ .
In the docs (https://docs.python.org/3/tutorial/floatingpoint.html#tut-fp-issues) it is written:
"...so Python keeps the number of digits manageable by displaying a rounded value instead..."
So my best *guess* is that after a number needs more than a certain number of bits, Python rounds it off.
Note: I might be wrong here. This is just my guess.
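For what it's worth, the display rounding the docs describe can be separated from the stored value using the stdlib decimal module:

```python
from decimal import Decimal

# repr shows a conveniently short decimal, but the binary value
# actually stored for 0.1 is slightly larger:
print(0.1)           # 0.1
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```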
Since I have seen this post referenced several times in the Q/A I come to believe that I haven't made my point clear enough, or my edit came too late.
x = 0.999...9xyz... is already 1.0 before it reaches the conversion to int. That is because its binary representation contains too many '1' bits. If there are enough '9's in the fractional part, you exceed the number of available bits in the mantissa. If the 54th bit is 1, the value gets rounded up according to the default rounding defined in IEEE 754. 1+1 is 0 carry 1, and that carry propagates all the way to the top, turning the mantissa from 111...1 into 000...0 and raising the exponent by one. The result is 2 to the power of zero, which is 1.0.
That is the value 'int' sees. It doesn't matter whether you then call trunc, round_down, floor or any chop-off function: they all operate on 1.0, and the result will always be 1.0, no matter how you truncate.
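A sketch of how to verify this from Python, using the stdlib struct module to look at the raw bits:

```python
import struct

# hex dump of the 8 bytes of a double, big-endian
bits = lambda x: struct.pack('>d', x).hex()

print(bits(0.99999999999999999))  # 3ff0000000000000
print(bits(1.0))                  # 3ff0000000000000: the very same double
```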
Awesome discussion. It's good to revisit floating point arithmetic oddities every once and a long while. 🤓
Interestingly, initializing a Decimal with the equivalent string value '0.99999999999999999' avoids the issue entirely, because the string is never converted to a binary float whose bits tick over to 1.0.
While the Decimal type is likely significantly slower than floats, it gives you greater control over behavior through precision and rounding contexts.
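A minimal sketch of that difference, assuming the value is kept decimal from the start:

```python
from decimal import Decimal

d = Decimal('0.99999999999999999')  # parsed exactly, no binary rounding
print(int(d))                       # 0: here int() truly truncates
print(int(0.99999999999999999))     # 1: the float literal is already 1.0
```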
Have you done the conversion? If yes, you may have noticed that 0.999... appears to be binary 0.111... .
If I haven't made any mistakes, then 0.99...9 with 'k' many '9's will have at least 3k leading '1's in the binary before the first '0' appears. That '3k' is a lower bound for the actual number of '1's.
For 16 '9's you get at least 48 '1's. The mantissa is only 52 bits wide, plus one implicit bit. So you are scratching the limit there.
In effect, the more '9's the more '1's. In the end you will have more '1's than bits in the mantissa and it gets rounded to even (last bit zero). So it becomes de facto 1.0. After which, 'int' applied will return 1.
Edit: upon revisiting my scribblings, make that ceil((ln(5) + (k-1)*ln(10)) / ln(2)) many '1's. [I could have just taken the logarithm instead of estimating.] So, for 16 '9's that is 53 '1's. It then depends on the remaining bits whether it gets rounded up.
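If it helps, the count of leading '1's can be checked with exact arithmetic via the stdlib fractions module (`leading_ones` is a hypothetical helper):

```python
from fractions import Fraction

def leading_ones(k):
    """Count the leading '1' bits in the binary expansion of 0.99...9 (k nines)."""
    x = Fraction(10**k - 1, 10**k)  # exactly 1 - 10**-k
    n = 0
    while True:
        x *= 2                      # shift one binary place left
        if x < 1:
            return n                # first '0' bit reached
        n += 1
        x -= 1                      # consume the '1' bit

print(leading_ones(1))   # 3  (0.9 = 0.11100110011... in binary)
print(leading_ones(16))  # 53
```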
The representation of double precision floating point numbers seems to reserve 52 bits for the precision (i.e. other than the sign bit and exponent). 2^52 is approximately 4.5*10^15. So 16 figures (including the non-fractional part) seems to be the maximum precision that can be stored in a double precision number using the IEEE 754 standard. I guess therefore that any number less than but approaching 1.0 and containing more than 16 digits is going to be rounded up.
It's just an application of David Gay's algorithm, i.e. truncation and approximation. Your CPU only understands binary, so your programming language has to convert your input into binary before it can compute with it. For a number like 1/3 this is problematic: 1/3 = 0.3333... repeats forever, and its binary expansion 0.010101... repeats forever too, so it cannot be represented exactly in a finite number of bits. The conversion therefore chops off the least significant digits and rounds to the nearest representable value.
Seems like there's a simple explanation: Before int() is called the number is first converted into a double. The double represents the most accurate representation of that number which is eventually 1 as you increase the number of nines. And int(1.0) obviously returns 1.
By the way, grappling with infinity is only hard until you decide that it's not.
Back in the early calculus days people would say: well, `0.999...` is `1-1/(10^n)`, and the fraction tends to 0, so 0.999... = 1. And they'd call it a day.
Then Cauchy came along and gave a formal definition of limits, to make things harder:
"You gimme a function `f(n)` and your proposed limit. Prove that the difference between the two is no more than a gap `ε` of my choosing. That is, I give you an `ε`, you find me `n`."
iff for every `ε > 0` there is an `N` such that `|f(n) - L| < ε` for all `n ≥ N`, then the limit of `f` is `L`.
So because the gap between 0.999... and 1 is arbitrarily small, they may as well be the same.
0.888... does not tend to 0.9 by the way. I bet you can't find any amount of 8s such that the gap becomes smaller than 0.01.
(0.899... does though)
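That claim about 0.888... can be checked with exact fractions (`gap` is a hypothetical helper): the gap tends to 1/90 ≈ 0.0111..., which indeed never drops below 0.01.

```python
from fractions import Fraction

def gap(n):
    """Distance between 0.9 and 0.88...8 (n eights), computed exactly."""
    eights = Fraction(8 * (10**n - 1), 9 * 10**n)  # 0.88...8 = (8/9)(1 - 10**-n)
    return Fraction(9, 10) - eights

print(gap(50) > Fraction(1, 100))  # True, no matter how large n gets
print(float(gap(50)))              # ≈ 0.0111
```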
(by the way)², never believe the triple dots "...", they are lying. `0.999...` hijacks your brain into thinking everything is OK, but it's not. What does `0.132...` even mean? Nothing.
We're getting very philosophical 😃. What's wrong with the theory that the IEEE 754 standard for double precision floating point numbers only allows for 53 significand bits and hence 16 significant digits and that any representation containing more digits than that is going to be rounded up? 🙆
What I said was really just specific for 0.999... In more general terms, I would expect a rounding behaviour based on the excess bits (54 and following). If they warrant a rounding up, the mantissa, as integral value, will be incremented by 1, which would flip all bits, lowest to highest until and including the first zero bit.
But the point being made is that it all happens before any type conversion, truncation or other program-side rounding takes place.
Point well taken, Schindlabua ☺.