+ 1

Why is ~0 equal to -1?

Why is ~0 equal to -1 and not 1?

20th Feb 2020, 8:34 AM
Pushpak Sarode
4 Answers
+ 9
The binary representation of 0 in 8 bits is 00000000. After the bitwise NOT (~) operation, all bits are inverted: 11111111. In a signed (two's-complement) representation, 11111111 is -1 in decimal (the most significant bit is 1, therefore the number is negative). I'm using an 8-bit example, but an int is 2 bytes on a 16-bit system and 4 bytes on a 32-bit system. This article can help you understand it better: https://medium.com/@LeeJulija/how-integers-are-stored-in-memory-using-twos-complement-5ba04d61a56c
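A minimal sketch of what this answer describes, assuming a typical 32-bit two's-complement int:

    #include <stdio.h>

    int main(void) {
        int x = ~0;                   /* flips every bit of 0 */
        printf("%d\n", x);            /* prints -1 on a two's-complement machine */
        printf("%x\n", (unsigned)x);  /* prints ffffffff: all 32 bits are 1 */
        return 0;
    }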
20th Feb 2020, 9:00 AM
🇮🇳Omkar🕉
+ 5
bahha🐧, your point about treating the integer literal as unsigned is correct, but unfortunately what you stated about getting 1 as output is wrong. When you complement all bits of 0 you get the largest possible unsigned int, represented by the UINT_MAX macro: 0xffffffff in hex, 4294967295 in decimal on a 32-bit system. That is obviously not 1.
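A quick sketch of this point, assuming a system where unsigned int is 32 bits (so UINT_MAX is 4294967295):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        unsigned u = ~0U;              /* complement of 0, computed as unsigned */
        printf("%u\n", u);             /* 4294967295 on a 32-bit unsigned int */
        printf("%d\n", u == UINT_MAX); /* prints 1: ~0U equals UINT_MAX, not 1 */
        return 0;
    }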
20th Feb 2020, 9:25 AM
🇮🇳Omkar🕉
+ 1
In addition to the explanation above: if you need 1, you have to treat it as an unsigned value, ~0U, to get 1.
20th Feb 2020, 9:05 AM
Bahhaⵣ
+ 1
🇮🇳Omkar🕉 you are right, thanks. You would have to take the absolute value to get 1:

    #include <stdio.h>
    #include <stdlib.h>

    unsigned n = ~0;   /* all bits set, i.e. UINT_MAX */

    int main() {
        /* passing n to abs() converts it to int; on typical two's-complement
           systems that conversion yields -1, so abs() prints 1 */
        printf("%d", abs(n));
        return 0;
    }
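Not from the thread, just an alternative sketch: since ~0 is -1 on a two's-complement machine, negating the result gives 1 without calling abs():

    #include <stdio.h>

    int main(void) {
        printf("%d\n", -~0);  /* ~0 is -1, so -~0 is 1 */
        return 0;
    }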
20th Feb 2020, 9:47 AM
Bahhaⵣ