What is the difference between float and double while using printf() function ? | Sololearn: Learn to code for FREE!

What is the difference between float and double while using printf() function ?

I read on the internet that float has 7 decimal digits of precision and double has 15, but when I write code like this, for example: double x = 12.7987986986986986986; printf("x = %.16f\n", x); and run the program, 16 decimal digits appear on the screen, which is more than 15, and I tried many numbers. I even tried %.53f and it worked, printing 53 decimal digits: the whole number I declared, with the rest as zeros. So, what is the difference? :\

16th Jun 2020, 7:33 PM
Mohamed Taha
2 Answers
0
It's about memory consumption, program optimization, and the precision of the mathematical operations you will perform with these variables: float takes 4 bytes, and double takes 8 bytes.

When you assign a constant to a variable, the value is stored in memory as the nearest binary value the type can represent (if the range is respected). A float can reliably hold about 7 significant decimal digits, and a double about 15 to 16 significant decimal digits in total (counting digits before and after the decimal point, since precision lives in the mantissa).

In printf(), the %.xf is just a formatter: it controls how many fractional digits are printed, not how many are accurate. Past the type's precision, the extra digits are simply the decimal expansion of the exact binary value stored in memory, so they may be zeros or digits that look arbitrary, but they carry no additional precision from your original number. https://en.m.wikibooks.org/wiki/A-level_Computing/AQA/Paper_2/Fundamentals_of_data_representation/Floating_point_numbers https://www.tutorialspoint.com/cprogramming/c_data_types.htm
17th Jun 2020, 2:23 AM
Vasile Eftodii
0
Thanks a lot!
18th Jun 2020, 2:34 PM
Mohamed Taha