+ 2

Why does 0.1 + 0.2 give 0.30000000000000004? (see description)

var num1 = 0.1;
var num2 = 0.2;
console.log(num1 + num2); // output: 0.30000000000000004

8th Jul 2018, 3:51 AM
Felipe Medeiros
4 Answers
8th Jul 2018, 5:47 AM
Burey
+ 2
4 in the end??? hmmm...bonus?😅😅😅
8th Jul 2018, 4:24 AM
Jingga Sona
+ 2
The core issue is how a binary computer, which stores everything as 1s and 0s, converts decimal numbers like 0.1 and 0.2 into binary. It can't... perfectly. Think of it like this: a human can understand that 1/3 + 1/3 + 1/3 = 1, but to a calculator that translates to .33333 + .33333 + .33333 = .99999. (The 3s and 9s never end... except they must, because the calculator can only hold so many digits per number.) The same problem occurs with any number that has a fractional part, i.e. a decimal point that "floats" within the number: binary can't always represent that number exactly, so digits get dropped at the end, just like 1/3 vs .33333, which can lead to rounding errors. The answer is almost always to calculate with integers, which binary CAN represent exactly, then move the decimal point at the very end by dividing by 10, 100, or whatever power of 10 is needed.
9th Jul 2018, 12:10 AM
Lisa F
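The integer trick described above can be sketched in JavaScript like this (a minimal illustration added for clarity, not code from the thread):

```javascript
// Direct floating-point addition picks up a rounding error:
console.log(0.1 + 0.2);           // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);   // false

// Workaround: do the math on integers (here, tenths), which binary
// represents exactly, then divide by the power of 10 at the end.
var tenths = 1 + 2;               // 0.1 -> 1 tenth, 0.2 -> 2 tenths
console.log(tenths / 10);         // 0.3
console.log(tenths / 10 === 0.3); // true
```

The same idea underlies working in cents instead of dollars for money: keep every intermediate value an integer and convert back only when displaying.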
+ 1
lol Jingga Sona, it's not a bonus, so to speak 😂. Felipe Medeiros, as you may already know, everything in memory is stored in binary. But many floating-point numbers can't be converted into binary exactly, so the system stores an approximation for those numbers, and because of that approximation we get values like this when dealing with floating-point arithmetic.
8th Jul 2018, 4:38 AM
Nikhil Dhama
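To make the point about binary conversion concrete: some decimal fractions have an exact binary form and some don't, which you can see directly in JavaScript (my own illustration, not code from the answer):

```javascript
// 0.5 = 1/2 is a power of two, so its binary form is exact and short:
console.log((0.5).toString(2));   // "0.1"

// 0.1 = 1/10 has an infinitely repeating binary expansion (...0011...),
// so the stored double can only be an approximation of it:
console.log((0.1).toString(2));   // "0.000110011001100110011..." (truncated)

// Printing extra decimal digits reveals that stored approximation:
console.log((0.1).toFixed(20));   // "0.10000000000000000555"
```

That trailing "...555" is the approximation the other answers describe; 0.2 carries a similar error, and adding the two approximations produces the 4 at the end of 0.30000000000000004.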