Programming Windows, 5th Edition:
Theoretically, a byte in the C language may be longer than 8 bits, but for most people, a byte (and thus a char) is 8 bits wide.
Isn't this sentence wrong? I think it should read:

A char in the C language may be longer than 8 bits.
I came here with some doubts of my own, and now I finally understand the issue. However, I'm not sure whether the original text you posted means the same thing as what I describe below. If possible, we can discuss it in the comment section.
First, here is the original text from "Primer Plus":
Many character sets contain far more than 127, or even far more than 255, values, such as Japan's kanji character set, and so on.
A platform that uses one of these sets as its basic character set would need a 16-bit or even 32-bit char. C defines a byte as the number of bits used by the char type. Therefore, on such a system, a byte in the C sense is 16 or 32 bits rather than 8 bits.
Note the premise: "one of the above sets is adopted as the basic character set". In other words, on a platform that adopts some large character set (such as Unicode, with more than 96,000 characters) as its basic character set, the C implementation may define a byte as 32 bits. However, most systems use ASCII as the basic character set, so on those systems C defines a byte as 8 bits.