+ 2
Why does 0.1 + 0.2 give this result? (see description)
var num1 = 0.1;
var num2 = 0.2;
console.log(num1 + num2); // output: 0.30000000000000004
4 Answers
+ 10
Some of the links are tagged with Python, but this behaviour originates from the same cause in both Python and JavaScript:
https://www.sololearn.com/discuss/1344720/?ref=app
https://www.sololearn.com/discuss/1288636/?ref=app
https://www.sololearn.com/discuss/1093191/?ref=app
https://www.sololearn.com/discuss/711763/?ref=app
+ 2
4 at the end??? hmmm... a bonus?
+ 2
The core issue is how a binary computer, which works only in 1s and 0s, converts decimal numbers like 0.1 and 0.2 into binary. It can't... perfectly.
Think about it like this: a human can understand that 1/3 + 1/3 + 1/3 = 1,
but to a calculator that translates to
0.33333 + 0.33333 + 0.33333 = 0.99999 (the 3s and 9s never end... except they must, because the calculator can only hold so many digits per number).
The same problem occurs in binary with many fractional numbers, i.e. any number with a decimal point that "floats" within it.
Binary can't always represent such a number perfectly, so digits get dropped at the end, just like 1/3 vs 0.33333, which can lead to rounding errors.
The answer is almost always to calculate with integers, which binary CAN represent perfectly, and then move the decimal point by dividing by 10, 100, or whatever power of 10 at the very end.
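A minimal sketch of that integer-scaling idea in JavaScript (the helper name `addDecimals` is just for illustration):

```javascript
// Naive floating-point addition shows the representation error:
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Work in scaled integer units instead (e.g. tenths), then divide
// back down at the very end. Integers up to Number.MAX_SAFE_INTEGER
// are represented exactly in binary floating point.
function addDecimals(a, b, decimals) {
  const scale = 10 ** decimals;
  return (Math.round(a * scale) + Math.round(b * scale)) / scale;
}

console.log(addDecimals(0.1, 0.2, 1));          // 0.3
console.log(addDecimals(0.1, 0.2, 1) === 0.3);  // true
```

The same trick is why money is often stored as integer cents rather than fractional dollars.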
+ 1
lol Jingga Sona, it's not a bonus, you could kinda say that.
Felipe Medeiros, as you may already know, everything in memory is stored in binary.
But many floating-point numbers can't be converted into binary exactly, so the system stores an approximation of those numbers,
and because of these approximations, we get this kind of value when dealing with floating-point numbers.
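You can see those stored approximations directly, and the usual workaround is to compare with a small tolerance (such as Number.EPSILON) instead of strict equality. A sketch:

```javascript
// Printing more digits reveals that 0.1 is stored as a nearby
// binary approximation, not exactly one tenth:
console.log((0.1).toFixed(20));

// So exact comparison of computed results fails...
console.log(0.1 + 0.2 === 0.3);  // false

// ...and a tolerance check is the common fix:
function nearlyEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) < eps;
}
console.log(nearlyEqual(0.1 + 0.2, 0.3));  // true
```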