Why, when adding float numbers, does 0.1 + 0.2 != 0.3 and produce a long decimal number? | Sololearn: Learn to code for FREE!
0

Why, when adding float numbers, does 0.1 + 0.2 != 0.3 and produce a long decimal number?

I wish to know how and why this happens, and how I can display just two digits after the decimal point. Thank you in advance.

12th Sep 2018, 2:47 AM
Jossue
2 Answers
+ 3
It is linked to the accuracy/precision problem of floating-point numbers and is a general caveat in programming. Read about it here, for example: https://en.m.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems For displaying such numbers, rounding or appropriate formatting is most often used.
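As an illustration, here is a short sketch in JavaScript (the behavior is the same in most languages, since they share the IEEE 754 double format):

```javascript
// 0.1 and 0.2 have no exact binary floating-point representation,
// so their sum carries a tiny rounding error.
const sum = 0.1 + 0.2;
console.log(sum);          // 0.30000000000000004
console.log(sum === 0.3);  // false

// For display, format to two decimal places:
console.log(sum.toFixed(2));  // "0.30"

// For comparisons, use a small tolerance instead of ===:
console.log(Math.abs(sum - 0.3) < Number.EPSILON);  // true
```

The key point is that the error exists in the stored value itself; `toFixed` only changes how the number is displayed, while tolerance-based comparison is what you need for correct logic.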
12th Sep 2018, 5:46 AM
Kuba Siekierzyński
+ 2
I know how to do it, but only in the JavaScript language. Sorry.
12th Sep 2018, 4:26 AM
Email Not Activated