# Why does 0.0/0.0 produce a negative nan in C++?

I was playing with this code yesterday and found that 0.0/0.0 produces a negative nan (-nan). It makes sense to me that such a mathematical result is undefined, or "not a number", but why is it negative? I haven't found an answer to this. Mystery. Furthermore, why does -nan + 2*nan = nan? https://code.sololearn.com/c2q6IYwx6PN3/?ref=app

1/31/2020 5:38:11 AM

ChillPill · 31 Answers

I've tested different compilers and CPU architectures now. It turns out that 0.0/0.0 = -nan is specific to x86-64 CPUs. On ARM CPUs, 0.0/0.0 results in a positive nan. It might even be that AMD and Intel CPUs behave differently. The compiler doesn't make any difference; it's the CPU that causes this weird behavior. Interestingly, though, clang optimises 0.0/0.0 to a positive nan, even though the result without optimisations would be -nan.

Avinesh thanks. Still, after reading this, I don't fully understand why 0.0/0.0 would give a negative sign bit. Also, why adding a negative nan and two positive nans gives me a positive nan is not clear to me yet either.

Yeah, I agree. So far, as Avinesh pointed out, a nan has a sign bit, and sometimes it can be printed out. We should perhaps check how nan is constructed in bits to figure out more.

Aaron Eberhardt great, thanks for checking and sharing your experience. Have you also figured out why -nan + 2*nan = nan?

Aaron Eberhardt Thanks for delving so deep into nan. To summarize so far: we couldn't find a logic to the sign of nan. We just know it can depend on the architecture, and that in the end the sign of nan doesn't really matter for calculations. Any calculation done with nan will return nan (with or without a negative sign). If someone else finds another logic to nan's sign, let us know. Until then, I'll mark Aaron's answer as best.

I know that in JavaScript all of these print the same, except that 0.0/0.0 gives NaN, not -NaN:

- 1/0 → Infinity
- 0/0 → NaN
- 0/1 → 0
- 1/1 → 1
- Infinity - Infinity → NaN
- Infinity + Infinity → Infinity
- Infinity + NaN → NaN
- NaN (+, -, *, /) anything → NaN
- NaN == NaN → false

Sonic I don't think it's the compiler; it's rather the CPU. But I'll have a look on godbolt.org later. ChillPill I can confirm that the sign bit is set to negative. 0.0 itself, however, has no negative sign. You can print the float as binary with this code of mine: https://code.sololearn.com/ceat24D85Zxf/?ref=app

ChillPill I just shared what I could find most relevant to the question. I don't understand the logic behind it either.

For the second part of the question, the expression seems to treat nan like an ordinary variable, which is very weird.

My guess is that some compilers always set the sign bit (to negative) when a result is nan, but of course it's a guess. It may be worthwhile testing this on several compilers.

On the SL compiler it's consistently -nan, not sometimes positive and sometimes negative. I was just testing whether it could occasionally be positive. https://code.sololearn.com/cGoLP2m1OkGN/?ref=app

Now the question, I guess, is which CPUs or operating systems set the sign bit and which ones don't.

ChillPill It seems like calculations with nan don't follow any particular logic, except that they always return a positive or negative nan. Probably there really isn't much logic behind it, because the sign of a nan actually doesn't matter. There's simply no reason why a CPU should process bits it doesn't care about (this also explains the weird behavior with 0.0/0.0). The problem is that nan doesn't depend on a sign, yet floats have a fixed sign bit that undesirably influences nan too. I think cout and printf should actually ignore the sign of nan: nan cannot have a sign, it's not a number.

My understanding of NaN is that the only part of NaN with any significance is that it's a NaN. There is no logic to how a NaN is created, and it is a mistake to treat the leftmost bit as a sign bit, or any other bit of it as having any significance. Whether this matches the specification of NaN or not, I don't know.

Yeah ChillPill, nice discovery. I go with Aaron Eberhardt's explanation. Sometimes different CPU architectures can skew the results. In computing as a whole, arithmetic is usually converted to binary, and the answer (result) is converted back to a human-readable format.