+ 29

int(0.99999999999999999) == 1. Why exactly is that?

In most cases, when you int a float (or in C, when you (int) a double), the effect seems to be that the decimals are just 'cut off'. Funnily enough, this is not always true: in the given example, the result will be 1. What exactly is going on?

I have figured out that it happens just beyond the precision cutoff point. So a double with 16 nines as decimals will be 'floored'; add one more, and it will be 'ceiled'. For floats, the borderline is between 7 and 8 nines.

Can someone explain, and in a way that doesn't necessitate deeply understanding all the hardware intricacies of doubles and floats?
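A minimal sketch reproducing the observation (assuming CPython 3 with IEEE 754 doubles):

```python
print(int(0.9999999999999999))    # sixteen 9s  -> 0 (decimals just cut off)
print(int(0.99999999999999999))   # seventeen 9s -> 1 (!)
```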

28th Jan 2020, 10:40 AM
HonFu
68 Answers
+ 18
~ swim ~, I wonder how this 'too precise' is precisely determined. Does a double know the places after the 16th? Wouldn't it have to know them in order to know where to round to? Wouldn't it need precision for this in the first place? I just tried what you said: 0.99999999999999995 becomes 1, but 0.99999999999999994 becomes 0. If we add another digit, this pattern seems to repeat: 0.999999999999999945 becomes 1, while 0.999999999999999944 becomes 0. 🤔
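For reference, that boundary can be checked directly (a sketch, assuming IEEE 754 doubles; the decision is actually made when the literal is parsed):

```python
print(int(0.99999999999999994))    # -> 0
print(int(0.99999999999999995))    # -> 1

# The literals are rounded to the nearest double at parse time:
print(0.99999999999999994 == 1.0)  # False
print(0.99999999999999995 == 1.0)  # True
```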
28th Jan 2020, 1:22 PM
HonFu
+ 16
blackwinter, that's a mathematical answer, but it doesn't explain why there is a precision cutoff point.
28th Jan 2020, 10:58 AM
Sonic
+ 13
blackwinter, now I'm not sure anymore, would the double specifics be easier to master? 😂 I've read about half of it, but somehow my brain can't get past the '0.9999 == 1 - wait what?' point yet. I guess I'm a case of the 'education problem' they mention... 🤔
28th Jan 2020, 11:01 AM
HonFu
+ 13
I did some research for Python. Hope it helps, though the official documentation is uncertain. https://code.sololearn.com/cj9eYbj65J8J/?ref=app
28th Jan 2020, 12:22 PM
Mihai Apostol
+ 13
I agree with ~ swim ~. In the docs (https://docs.python.org/3/tutorial/floatingpoint.html#tut-fp-issues) it is written: "...so Python keeps the number of digits manageable by displaying a rounded value instead..." So my best *guess* is that after a number exceeds a specific number of bits, Python rounds it off. Note: I might be wrong here. This is just my guess.
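The 'rounded display' part of the docs is easy to see by asking for the exact stored value (a small sketch):

```python
from decimal import Decimal

x = 0.1
print(x)            # 0.1 - the shortest string that round-trips to the same double
print(Decimal(x))   # 0.1000000000000000055511151231257827021181583404541015625
```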
28th Jan 2020, 12:23 PM
XXX
+ 10
Awesome discussion. It's good to revisit floating point arithmetic oddities every once in a while. 🤓 Interestingly, initializing the Decimal type with the equivalent string value of 0.99999999999999999 overcomes the issue of the float conversion turning the bits over to 1.0. While the Decimal type is likely significantly slower than floats, you have greater control over behavior based on precision and rounding contexts. Take a look at these links for more details: https://code.sololearn.com/c9mQEFxn9b07/ https://docs.python.org/3/library/decimal.html
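Roughly what that looks like (a sketch; the string form never touches a float):

```python
from decimal import Decimal

exact     = Decimal("0.99999999999999999")   # built from the string, stays below 1
via_float = Decimal(0.99999999999999999)     # the literal is already the double 1.0

print(exact)        # 0.99999999999999999
print(via_float)    # 1
print(int(exact))   # 0 - truncation behaves as expected again
```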
5th Feb 2020, 9:27 AM
David Carroll
+ 9
Hopefully it won't double over. 😂
28th Jan 2020, 2:41 PM
HonFu
+ 8
Mihai Apostol, so my suspicion that I can't rely on 'int-truncating' *was* justified after all, even if my provided example was caused by something else, hm?
4th Feb 2020, 6:34 PM
HonFu
+ 7
Grappling with infinity is only hard until you decide that it's not, by the way. Back in the early calculus days people would say: well, `0.999...` is `1 - 1/(10^n)`, and the fraction tends to 0, so 0.999... = 1. And they'd call it a day.

Then Cauchy came along and gave a formal definition of limits, to make things harder: "You give me a function `f(n)` and your proposed limit `L`. Prove that the difference between the two is no more than a gap `ε` of my choosing. That is, I give you an `ε`, you find me an `n`." If `|f(n) - L| < ε` can be satisfied for an arbitrary ε > 0, then the limit of `f` is `L`. So because the gap between 0.999... and 1 is arbitrarily small, they may as well be the same.

0.888... does not tend to 0.9, by the way. I bet you can't find any amount of 8s such that the gap becomes smaller than 0.01. (0.899... does, though.)

(By the way)², never believe the triple dots "..."; they are lying. `0.999...` hijacks your brain into thinking everything is OK, but it's not. What does `0.132...` even mean? Nothing.
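A quick numeric illustration of the ε argument (just a sketch, nothing rigorous):

```python
# How many 9s are needed before the gap 1/10**n drops below a chosen epsilon?
for eps in (1e-3, 1e-9, 1e-15):
    n = 1
    while 1 / 10**n >= eps:
        n += 1
    print(f"eps={eps}: {n} nines suffice")

# The gap between 0.888... and 0.9 never gets below 1/90 ~ 0.0111:
print(0.9 - 0.8888888888)   # still about 0.0111
```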
30th Jan 2020, 11:31 AM
Schindlabua
+ 7
We're getting very philosophical 😃. What's wrong with the theory that the IEEE 754 standard for double precision floating point numbers only allows for 53 significand bits and hence 16 significant digits and that any representation containing more digits than that is going to be rounded up? 🙆
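Those limits can be inspected directly (assuming the platform's doubles are IEEE 754, which is true on all common platforms):

```python
import sys

print(sys.float_info.mant_dig)   # 53 significand bits
print(sys.float_info.dig)        # 15 - decimal digits that always survive the round trip
print(2 ** 53)                   # 9007199254740992 -> roughly 16 significant digits
```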
30th Jan 2020, 11:59 AM
Sonic
+ 7
Jay Matthews I believe that converting a float to int in Python truncates toward zero (which, for positive values, means rounding down to the nearest integer). However, the floating point literal with seventeen 9s is converted to 1.0 before it is passed to int() as an input. That's why all results are 0 until the last one, which is converting 1.0 to int.
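In other words (a small sketch of the same point):

```python
x = 0.99999999999999999   # seventeen 9s
print(x)                  # 1.0 - already rounded when the literal was parsed
print(x == 1.0)           # True
print(int(x))             # 1 - int() simply truncates the 1.0 it was handed
```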
15th Jul 2021, 7:06 AM
David Carroll
+ 6
Sonic, yeah, or why the cutoff doesn't lead to all of those decimals just being cut off.
28th Jan 2020, 11:02 AM
HonFu
+ 6
The representation of double precision floating point numbers seems to reserve 52 bits for the precision (i.e. other than the sign bit and the exponent). 2^52 is approximately 4.5*10^15, so 16 figures (including the non-fractional part) seems to be the maximum precision that can be stored in a double precision number using the IEEE 754 standard. I guess therefore that any number less than but approaching 1.0 and containing more than 16 digits is going to be rounded up to 1.0.
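That estimate matches what the neighbouring doubles around 1.0 actually look like (a sketch; math.nextafter needs Python 3.9+):

```python
import math

below_one = math.nextafter(1.0, 0.0)   # the largest double smaller than 1.0
print(below_one)                       # 0.9999999999999999 (sixteen 9s)
print(f"{below_one:.20f}")             # 0.99999999999999988898 (first 20 decimals of the stored value)
print(1.0 - below_one == 2 ** -53)     # True: the gap just below 1.0 is 2^-53
```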
28th Jan 2020, 2:03 PM
Sonic
+ 6
It's just an application of David Gay's algorithm, i.e. data truncation and approximation. Your CPU only understands binary, so in order to do calculations on your CPU, your programming language has to convert your inputs into binary. But for numbers like 1/3, representing this in binary is quite difficult, as 1/3 = 0.3333333333... till infinity, which cannot be represented exactly; in binary it is the repeating expansion 0.010101.... So David Gay's algorithm helps to truncate this by chopping off the least significant digits, and the result is then rounded off to the nearest representable value.
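You can see the truncated-and-rounded result for 1/3 directly (small sketch):

```python
from decimal import Decimal

third = 1 / 3
print(third)           # 0.3333333333333333 - the rounded display
print(Decimal(third))  # 0.333333333333333314829616256247390992939472198486328125
```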
28th Jan 2020, 6:02 PM
Codebeast**
+ 6
Seems like there's a simple explanation: before int() is called, the number is first converted into a double. The double holds the closest representable value to that number, which eventually is 1.0 as you increase the number of nines. And int(1.0) obviously returns 1. https://code.sololearn.com/cBNa2y6Lkoi6/?ref=app
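A rough illustration of the literal collapsing to 1.0 as the nines increase:

```python
for n in range(14, 19):
    s = "0." + "9" * n
    print(n, float(s), int(float(s)))   # from n = 17 onwards the float is 1.0
```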
29th Jan 2020, 7:53 AM
Aaron Eberhardt
+ 6
Katz321Juno First... you need to realize that the actual inputs for "Wrapping floats with decimals" are first converted from floats, as seen in "Floats with 16 and 17 9s". I assume the first Decimal() value, with sixteen 9s, gets carried out to 53 or so places due to the binary conversion. The second Decimal() value is based on the seventeen 9s converted to 1.0, which then converts to 1.
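Assuming the code under discussion wraps the raw float literals, this is roughly what Decimal() captures (a sketch):

```python
from decimal import Decimal

print(Decimal(0.9999999999999999))   # sixteen 9s: the exact double, 53 decimal places
print(Decimal(0.99999999999999999))  # seventeen 9s: the literal is 1.0, so this prints 1
```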
15th Jul 2021, 6:46 AM
David Carroll
+ 5
You could be perfectly sure instead if you had read just the headline of my original post, because this was my question to begin with: why this happens.
29th Jan 2020, 6:54 PM
HonFu
+ 5
Sonic HonFu I think when it comes to floats it's even simpler than that: you write a number in decimal and it has to be converted to binary. Of course you can write arbitrarily precise decimal numbers in your source file, and they might never fit in a float, so you will get rounding behaviour.

Maybe it's worth noting that you don't need a cast to int as you wrote in your question; comparing to a float 1.0 should be equal too.

I would imagine the details of how the rounding happens depend on the algorithm that is used to transform your decimal number to binary, and it shouldn't really be about IEEE 754, or even all nines for that matter - you get rounding with most numbers. For example, 0.50000004 "rounds" up to 0.500000059604644775390625 when stored as a single-precision float.

Someone else mentioned Gay's algorithm; I've never heard of it, but maybe that is a standard way of doing things.
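That single-precision rounding can be checked with a round trip through a 32-bit float (a sketch):

```python
import struct

# Pack the value as a 32-bit float and unpack it again to see where it lands:
as_float32 = struct.unpack("f", struct.pack("f", 0.50000004))[0]
print(f"{as_float32:.24f}")    # 0.500000059604644775390625
print(f"{0.50000004:.24f}")    # the double stays much closer to 0.50000004
```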
4th Feb 2020, 9:55 AM
Schindlabua
+ 5
Coder Kitten yep, something like this must be happening. It occurred to me that we can just check, though, since any standard library will have an `atof` function or similar to turn strings into floats, and a compiler would probably just use that!

I looked around a bit, and Codebeast** probably deserves the green tick here: David Gay's algorithm seems to be one of the most common and accurate ways to convert decimal numbers to floats and the other way around: https://ampl.com/netlib/fp/dtoa.c Approximately half of the code is dedicated to rounding and I don't really understand any of it.

The "How I would badly have done it myself" award goes to https://github.com/GaloisInc/minlibc/blob/master/atof.c It's pretty straightforward code-wise, but it will round(?) 0.99999999999999999 to 1.000000000000000881178... so it even goes above 1, kinda crazy.

EDIT: And according to Gay's code, IEEE does define how floats should be rounded! But I don't really understand how or where that comes into effect.
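To get a feel for why the naive approach drifts, here is a deliberately simple string-to-float conversion in the spirit of that minlibc atof (purely illustrative, not how CPython actually parses literals):

```python
def naive_atof(s: str) -> float:
    int_part, _, frac_part = s.partition(".")
    value = float(int(int_part or "0"))
    scale = 0.1
    for ch in frac_part:            # every step adds and multiplies already-rounded
        value += int(ch) * scale    # doubles, so small errors pile up digit by digit
        scale *= 0.1
    return value

print(naive_atof("0.99999999999999999"))   # compare against the correctly rounding parser:
print(float("0.99999999999999999"))        # 1.0
```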
4th Feb 2020, 3:23 PM
Schindlabua
+ 5
Schindlabua, that's probably the issue with the 'deserved green tick' - I specifically asked for it to be explained in a way that *doesn't* necessitate any deeper double-type understanding. 😁 I've come to realize that I'm probably being a bit of a tough customer about this - involuntarily! Maybe it's just time that I buckle up and finally try to understand all of the messiness more deeply...

From a purely practical perspective, what made me ask this question in the first place was this: in my code, I have been lazily just 'inting' stuff with the goal of 'cutting off the decimals'. Then I stumbled across that 0.999... business and started to think - wait, am I playing with fire here, and sometimes, just sometimes, values will be *rounded up* without my knowledge?

From the last few posts I would assume that this *doesn't* happen, because the problem occurs before I even have a float - by parsing a too-long decimal number string, so that the double specs round it up for me.
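That practical conclusion is easy to double-check (a sketch): once a value actually is a float, int() only truncates toward zero; the rounding happened earlier, when the literal was parsed.

```python
x = 0.9999999999999999            # sixteen 9s: a genuine double just below 1.0
print(int(x))                     # 0
print(int(2.999999999999999))     # 2  - decimals really are just cut off
print(int(-2.999999999999999))    # -2 - truncation toward zero, not floor
```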
4th Feb 2020, 3:48 PM
HonFu