Please explain why output is true. | Sololearn: Learn to code for FREE!



#include <stdio.h>

int main() {
    float a = 0.3;
    double b = 0.3;
    if (b < a) {
        printf("true");
    } else {
        printf("false");
    }
    return 0;
}
// Output: true

6/18/2021 2:17:41 PM

Tushar Kumar 🇮🇳

11 Answers



Tushar Kumar please add this to your code before the if statement:

printf("a = %.10f", (double)a);
printf(", b = %.10f", b);
// a = 0.3000000119, b = 0.3000000000
// true

Then you will see what happens before the if comparison: an implicit conversion (type casting) to the same data type, here to double.


When compared with a double, a float is less accurate.


The compiler cannot compare apples with oranges, so here it decides to convert the float to the more precise double type. The conversion exposes the float's rounding error, so the two values end up different. Similar programming errors have led to rocket crashes and other severe consequences in the past. The programmer must be aware of how the calculations work digitally in order to counteract this accordingly.


Tushar Kumar The one with more precision will not always be the smaller one. It depends on the value you are assigning. If you want, you can take a look at my answer to a very similar question, where I explained it in more detail.


Here the code has followed the conversion ranking, I guess (bool -> char -> int -> float -> double): the narrower operand is promoted to the wider type.


JaScript I didn't understand; a little more clarification, please. Also, when I change the values to a = 0.03 and b = 0.03, it becomes false. Why?


JaScript Oh, thank you. Now I understand what is actually going on inside. Also, does it mean the one with more precision will always be the smaller one? Is that why "false" (the double branch) is not getting printed?


JaScript I don't want any more rocket crashes, so I'm going to dig into this in depth xD. Thank you, Sir.


Hape I went through your answers; they're really helpful. At least they cleared my doubt: the fact that float 0.3 is greater than double 0.3 doesn't mean the float will always be greater. For example, when both the float and the double hold 0.7, the double 0.7 is greater. So it differs from number to number. Now I'm wondering: what's a good way of comparing precise decimal numbers? Should I always compare values of the same data type?


Tushar Kumar there are several ways to handle this problem. In general you cannot exactly compare all real numbers with each other using a computer; that is just not possible (mathematically).

This problem actually arises from the fact that you are thinking in numbers nicely representable in the decimal number system, while a computer is thinking in numbers nicely representable in binary (at least computers using binary floating point numbers, which most computers do).

So if you really want (or need) to compare decimal floating point numbers, just don't use floats at all and instead use integers. This amounts to implementing a custom decimal floating point or fixed point arithmetic. It sounds more complicated than it is: just store 3 instead of 0.3 and remember somewhere that you have to divide by ten to get the actual value. Then you can compare decimal numbers "exactly".


They are both the same, so the code can't print false, because 0.3 has been assigned to both a and b. So it can only print true as the output.