@dammkewl it means that the range is -128…127 instead of 0…255
the fact that data types can differ between architectures, not only in size but also in whether they can be negative, is one of the totally bizarre things in C that has probably cost the world more in debugging and consequent patching than the moon landing…
@dammkewl if you accidentally use 'char' (rather than 'signed char' or some typedef of it) where you need a small integer that, say, has to go negative to signal errors…
you've just walked with open eyes into one of the many traps of the complete disaster that is the C language, congratulations!
yes, you'll still make these mistakes despite more than 20 years of experience programming
@orionwl I'm not specifically familiar with C & its intricacies, but shouldn't the language define the concept of 'char' regardless of CPU architecture?
I know some assembly basics, but only for simplified architectures, so I don't know of any opcodes that specifically recognize a char type. Is that the case with current-day architectures? Or is there something between C code and the assembly it compiles to (i.e. the compiler) that depends on architecture-specific behaviour?
@dammkewl to be honest I have no idea why it is the case
it's not that x86 assembly 'likes' signed bytes better than ARM/RISC-V
in all other languages I've worked with, types sometimes depend on the word-size of the platform, but definitely the *kind* of type (signed integer, unsigned integer, float, ...) doesn't depend on the platform
it seems like a peculiarity that grew that way historically and never converged over time