0

char and unsigned char in C

So I understood that signed char has the range -128 to 127, and unsigned char 0 to 255. But how do signed char sc = 200 and unsigned char uc = 200 represent the same character even though 200 is outside the range of 'sc'? Also, if char is only used to store a character, what does it mean to say it has the range 0 to 255? Does it mean that char has 256 different values, or that char can represent the numbers 0 to 255 (I think the latter is wrong but I don't know why)?

24th Feb 2020, 6:04 PM
Manucians Trung
4 Answers
+ 4
Signed and unsigned are just two ways of representing data. You can represent the same data as signed or as unsigned, and it is your task to tell the compiler how you will use that data (as unsigned or signed).

Data can store a value in the range from 0 to maxValue. maxValue is defined by the number of data bits and equals 2^n - 1, where n is the number of bits. So for 1 byte, n = 8 and maxValue = 2^8 - 1 = 255; 1 byte stores data in the range 0 to 255 (256 values in total). That is the unsigned representation of a number (unsigned char).

For the signed representation, the most significant bit of the data is used as the sign: it is 1 for a negative number and 0 for a non-negative one. The other bits contain the magnitude of the number. The value is calculated as follows: if the sign equals 0, then value = magnitude; else value = magnitude - 2^(n-1). So for signed char, n = 8: value = (sign == 0) ? magnitude : (magnitude - 128). The max value of signed char is 127 (sign = 0, maxValue of the 7-bit magnitude = 2^7 - 1 = 127). The min value of signed char is -128 (sign = 1, magnitude = 0, so value = 0 - 128 = -128).

The number 200 will be represented as 200 for unsigned char and as -56 for signed char. -56 is calculated as follows: sign = most significant bit of 200 = ((200 & 128) >> 7) = 1; magnitude = the other bits = (200 & 127) = 72; value = 72 - 128 = -56.

When you store 200 in a signed char variable, the compiler just stores the bit pattern without any modification, even though that value is out of range for the signed representation. Of course, if you store a value that can't be represented by the number of bits the variable has, it will be truncated: you can't store 256 in an 8-bit variable; if you try, the variable will contain 256 & (2^8 - 1) = 0.
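Here is a minimal sketch of that in code (assuming the usual two's-complement representation; strictly speaking, storing an out-of-range value in a signed char is implementation-defined in C):

#include <stdio.h>

int main(void) {
    unsigned char uc = 200;   /* fits: unsigned range is 0..255 */
    signed char   sc = 200;   /* out of range: typically wraps to 200 - 256 = -56 */

    printf("uc = %d\n", uc);                              /* prints 200 */
    printf("sc = %d\n", sc);                              /* prints -56 on two's-complement machines */
    printf("same byte: %d\n", (unsigned char)sc == uc);   /* prints 1: the stored bit pattern is identical */
    return 0;
}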
24th Feb 2020, 8:32 PM
andriy kan
+ 3
There are different character encoding standards like ASCII, Unicode, UTF-8, and so on. The char datatype size in C is only 1 byte, so with 1 byte you can represent 256 different values. For unsigned char, all 8 bits are used for the value, whereas in signed char the most significant bit is reserved for the sign (0 for positive, 1 for negative), so only 7 bits are used for the magnitude. If sc = 200, the value exceeds 127, so it wraps around: 01111111 + 1 = 10000000, i.e. after 127 the next value is -128, and counting repeats from there. Two's complement gives you the equivalent value: 256 - 200 = 56, so if you assign signed char c = 200, it is stored as signed char c = -56, which has the same bit pattern as unsigned char c = 200.
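As a small illustration of reading the same byte both ways (again assuming two's complement, so the signed reading of 1100 1000 is 200 - 256 = -56; the cast of an out-of-range value is implementation-defined in C):

#include <stdio.h>

int main(void) {
    unsigned char bits = 0xC8;                    /* 200 = binary 1100 1000 */

    /* Unsigned reading: all 8 bits are value bits. */
    printf("unsigned: %u\n", bits);               /* 200 */

    /* Signed reading: top bit set means negative; value = 200 - 256. */
    printf("signed:   %d\n", (signed char)bits);  /* -56 on two's-complement machines */
    return 0;
}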
24th Feb 2020, 7:28 PM
Jayakrishna 🇮🇳
+ 1
It's more correct to say that it is about how the compiler represents them. The data stored in the variable c below can be printed as both signed and unsigned:

#include <stdio.h>

int main(void) {
    char c = 200;   /* the byte 1100 1000 is stored; if plain char is signed, c typically holds -56 */
    unsigned char *puc = (unsigned char *)&c;
    signed char   *psc = (signed char *)&c;
    printf("unsigned=%d signed=%d\n", *puc, *psc);   /* unsigned=200 signed=-56 */
    return 0;
}

Guided by the type of the variable (signed or unsigned), the compiler uses the appropriate operations for multiplication, division, type casting, and so on.
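A short sketch of that last point (assuming two's complement, reusing -56 and 200 as the example byte from above): the same bit pattern gives different results once arithmetic or comparisons enter the picture.

#include <stdio.h>

int main(void) {
    signed char   sc = -56;   /* byte 1100 1000 */
    unsigned char uc = 200;   /* the same byte  */

    /* Division: the compiler treats the operands as signed vs unsigned. */
    printf("sc / 2 = %d\n", sc / 2);   /* -28 */
    printf("uc / 2 = %d\n", uc / 2);   /* 100 */

    /* Comparisons differ too. */
    printf("sc < 0: %d\n", sc < 0);    /* 1 */
    printf("uc < 0: %d\n", uc < 0);    /* 0 */
    return 0;
}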
25th Feb 2020, 6:06 AM
andriy kan
0
So basically signed and unsigned are different only in the way the computer stores them, but their program results are the same, right??
25th Feb 2020, 5:23 AM
Manucians Trung