# Python and floats

I wrote a very simple piece of code to show the trouble with floating point numbers:

```python
a = 0.1
b = 0
for i in range(3):
    b = a + b
print(b)
```

Here is the link if you want to run/modify it: https://code.sololearn.com/cjYE2W2a0eI7/?ref=app

It prints `0.30000000000000004` when it should be `0.3`. What are the ways you Pythonists deal with this?

4/26/2018 11:05:43 AM

Cépagrave · 39 Answers

Well, it really depends on your application.

1. If using round() as @🌛DT🌜 proposed is enough for you, that works, but the errors will grow quickly if you perform many different operations on rounded results.
2. It is much better to perform the calculations without rounding and only tidy the number up when you print it to the screen, using format() as @Oma Falk proposed.
3. However, the only truly precise way to deal with decimal numbers is the decimal module, as @Jan Markus said. It increases memory consumption and slows calculations down a bit, but the result is worth it! You should use it in all serious applications where numeric precision matters.
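A minimal sketch comparing the three approaches on the original sum (all standard library, no extra assumptions):

```python
from decimal import Decimal

# Reproduce the sum from the question
b = 0.0
for _ in range(3):
    b += 0.1                      # b ends up as 0.30000000000000004

rounded = round(b, 2)             # 1. round(): 0.3 -- fine for a one-off result
shown = "{:.2f}".format(b)        # 2. format(): '0.30' -- b itself stays untouched
exact = sum(Decimal("0.1") for _ in range(3))  # 3. Decimal: exactly Decimal('0.3')

print(rounded, shown, exact)
```

Note that only the Decimal version is exact throughout; the first two just clean up the display of an already-imprecise float.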

Python surprises me a lot, many times... 😀 Cépagrave, can you tell me the reason why you are not using the round() function? round(b, 2)

The decimal module deals with this topic, i.e. rounding errors when using floating point arithmetic. For further information refer to https://docs.python.org/3/library/decimal.html and https://pymotw.com/3/decimal/ It is used to do exact calculations, e.g. in the field of finance.
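A short sketch of typical decimal usage in a money-style calculation (the prices here are made-up example values):

```python
from decimal import Decimal, ROUND_HALF_UP

# Build Decimals from strings; Decimal(0.1) would inherit the float's binary error
price = Decimal("19.99")
total = price * 3                 # Decimal('59.97') -- exact decimal arithmetic

# quantize() fixes the result to a given number of places, e.g. whole cents
share = (total / 7).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Constructing from the float shows what 0.1 really stores
print(Decimal(0.1))               # the binary approximation, not Decimal('0.1')
```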

So, I decided to test this decimal module. Very interesting! I will be very scared when using for loops now :-( https://code.sololearn.com/cSVc29NqF3at/#py Please don't hesitate to tell me if something is wrong in this code. It's very strange to see how the difference between Python float and Decimal after many iterations is not linear: it sometimes jumps and then stays quite stable, but most importantly it is so big after around 650 iterations... Even more amazing: it suddenly goes back down between 675 and 680 iterations. It surely depends on what operations are done in the loop.
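A simple way to watch float error accumulate in a loop while Decimal stays exact (this repeated-addition loop is an illustration, not the linked code):

```python
from decimal import Decimal

f = 0.0
d = Decimal("0")
for _ in range(1000):
    f += 0.1                      # binary float: a tiny error is added each step
    d += Decimal("0.1")           # decimal: stays exact at every step

print(f)                          # close to, but not exactly, 100.0
print(d)                          # Decimal('100.0')
```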

Thank you Xan, I already read about the theory behind it, even though what I read was a really short explanation compared to the PDF you propose here. I'll read it to get a more detailed understanding. But my question is more concrete: how do you deal with it in a simple piece of code?

@Jan Markus Thank you, I didn't know this module, will give it a try ! Do you have examples of codes where you've used it ?

@Oma Falk Thank you! That's actually what I first used and was not happy with (check my updated code to understand why), because it prints the same number of decimals whether the value is exact or not. round() is better for my needs.

@🌛DT🌜 Thanks! round() is OK for most uses, true. I modified my code: there is still a small problem with it showing 1.0 instead of just 1.
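One way around the trailing `.0`, assuming the goal is just cleaner display: round() returns a float, but the general format spec `:g` trims insignificant zeros when printing.

```python
b = sum(0.1 for _ in range(10))   # 0.9999999999999999

r = round(b, 2)                   # 1.0 -- still a float, so it prints a trailing .0
s = "{:g}".format(r)              # general format trims the zeros: '1'

print(r, s)
```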

It's not Python, per se; it's all IEEE-754 binary-encoded floats. Python provides the function math.fsum() to track accumulated error, using the Shewchuk algorithm. Docs: "Return an accurate floating point sum of values in the iterable. Avoids loss of precision by tracking multiple intermediate partial sums." The algorithm: Binary Floating Point Summation Accurate to Full Precision (Python recipe) https://code.activestate.com/recipes/393090 "Completely eliminates rounding errors and loss of significance due to catastrophic cancellation during summation" (three approaches).
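A minimal demonstration of math.fsum() against the naive built-in sum():

```python
import math

values = [0.1] * 10

print(sum(values))        # 0.9999999999999999 -- naive left-to-right summation
print(math.fsum(values))  # 1.0 -- tracks partial sums, so no precision is lost
```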

@Cépagrave I must admit that I have not used it yet. I stumbled upon it when browsing through the "Python Module of the Week" website https://pymotw.com/3 .

Nevfy Thank you, this makes a good concluding synthesis to the question that can be useful for others, thus I'll mark it as best.

Nevfy Thank you for checking my code (output corrected now). Yes, division by a number close to 0 has a strong effect here. I'll definitely use decimal when cooking with numbers in looping pans! Do you also use other functions from this module?

Nevfy indeed the best answer. Cépagrave: a very interesting post. Hope more Pythonists will read (and upvote) it.

The reason this happens is that computers have to store numbers in binary, and 0.1 in binary is 0.000110011... (the "0011" group recurs forever). Since computers have a finite number of bits, they can store a number *close* to 0.1, but not *exactly* 0.1. This can be avoided by storing the decimal digits separately (which is essentially what the decimal module does). It takes some more RAM, but is more accurate.
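You can inspect the stored binary approximation directly with standard tools:

```python
from decimal import Decimal

# float.hex() shows the base-2 significand and exponent actually stored for 0.1
print((0.1).hex())        # '0x1.999999999999ap-4'

# Decimal(float) converts that stored binary value exactly into decimal digits,
# revealing the long tail hidden behind the rounded display of 0.1
print(Decimal(0.1))
```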
