Why is 0.1 + 0.2 == 0.3 false?

Why is 0.1 + 0.2 == 0.3 false? https://code.sololearn.com/cn9WNbcA788W/?ref=app

9/21/2019 3:57:54 PM


4 Answers



Because a double has only about 15-17 significant decimal digits of precision, and 0.1, 0.2, and 0.3 cannot be represented exactly in binary. The nearest doubles are:
a = 0.1; // 0.1000000000000000055511151231257827...
b = 0.2; // 0.2000000000000000111022302462515654...
c = 0.3; // 0.2999999999999999888977697537484345...
So a + b rounds to 0.30000000000000004, which is a different double than c. Changing the data type from double to float happens to make 0.1f + 0.2f == 0.3f come out true, but only because the single-precision rounding coincides; float arithmetic is just as inexact.


https://www.sololearn.com/discuss/1392436/?ref=app
https://www.sololearn.com/discuss/1344720/?ref=app
https://www.sololearn.com/discuss/1288636/?ref=app
If you want a great explanation of this, watch the video at the following link: https://javascriptweekly.com/link/77414/83ac73f344


Hello World and burey, thanks, I got the answer.


Because floating-point arithmetic is not exact: computers store floats in binary, and values like 0.1 have no finite binary representation, so each value is rounded and small errors accumulate in the result.