Bit complicated. Computers only know numbers; humans only know text. So we need a way to turn text into numbers and the other way around. These days we do that with Unicode, and Java uses two bytes per `char` so it can store a single UTF-16 code unit. (Note: a code unit, not necessarily a whole code point — characters outside the Basic Multilingual Plane, such as most emoji, take two `char`s, a surrogate pair.)
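A quick sketch of what that means in practice. This assumes nothing beyond the standard library: `Character.BYTES` reports the size of `char`, and an emoji outside the BMP shows up as two `char`s but one code point:

```java
public class CharSize {
    public static void main(String[] args) {
        // A Java char is a 16-bit UTF-16 code unit.
        System.out.println(Character.BYTES); // prints 2

        // BMP characters fit in a single char...
        String latin = "A";
        System.out.println(latin.length()); // prints 1

        // ...but supplementary code points need a surrogate pair:
        // two chars encoding one code point.
        String emoji = "\uD83D\uDE00"; // U+1F600 GRINNING FACE
        System.out.println(emoji.length()); // prints 2
        System.out.println(emoji.codePointCount(0, emoji.length())); // prints 1
    }
}
```

So `String.length()` counts code units, not characters as a human would count them — worth keeping in mind whenever emoji or rare CJK characters are involved.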
Before Unicode there was ASCII, and later ISO-8859 and Windows-1252. They used one byte per character (ASCII is actually 7-bit, 128 characters; the others extend it to the full byte). That means there are at most 256 possible characters, which is enough for the Latin alphabet, but not enough for Japanese, Russian, or emoji. And that's what C calls `char`.
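To make the "256 slots" limit concrete, here's a small Java sketch (Java rather than C, to stay consistent with the rest of the thread): a Latin letter's code point fits in one byte, while a Cyrillic letter's does not, so no single-byte encoding can hold both ASCII and every other script at once:

```java
public class OneByte {
    public static void main(String[] args) {
        // 'A' (U+0041) fits in a single byte, as in ASCII/Latin-1.
        char a = 'A';
        System.out.println((int) a); // prints 65

        // Cyrillic 'я' (U+044F) has a code point above 255, so it
        // cannot fit in the 256 slots of a one-byte encoding.
        char ya = '\u044F';
        System.out.println((int) ya); // prints 1103
    }
}
```

Single-byte encodings worked around this with per-region code pages (ISO-8859-5 for Cyrillic, etc.), which is exactly the mess Unicode was designed to end.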
Java does have a 1-byte datatype called `byte`.
C does have a wider character type called `wchar_t` (its size is platform-dependent: commonly 2 bytes on Windows, 4 on Linux).
Because Java is Unicode-based while C's `char` is based on single-byte encodings like ASCII. One byte can only distinguish 256 values, which is nowhere near enough for the tens of thousands of characters Unicode defines, so a Java `char` is 2 bytes (65,536 values) while a C `char` is 1 byte.
Java supports Unicode characters, which cover the letters of a great many languages, hence the 2-byte size; the ASCII characters C was built around only include English letters and a handful of symbols, which fits comfortably within 1 byte.
There's an encoding angle to this too: a Java `char` is specifically a UTF-16 code unit, whereas a C `char` is just a byte whose meaning depends on whatever encoding the program assumes.
Java supports Unicode, which covers the characters and symbols of most of the world's writing systems, and 1 byte of memory is not sufficient to store all of them, so Java takes 2 bytes for a `char`. C supports ASCII-style single-byte encodings; ASCII only covers English letters and symbols, so for storing those, 1 byte is sufficient (in C).