@orionwl chars can be signed (on intel)? What does the sign bit do?
@dammkewl it means that the range is -128…127 instead of 0…255
the fact that data types can be different on different architectures, not only in size but also in whether they can be negative or not, is one of the totally bizarre things in C that has probably cost the world more, in debugging and consequent patching, than the moon landing…
@orionwl I'm not specifically familiar with C & its intricacies, but shouldn't the language define the concept of 'char' regardless of CPU architecture?
I know some assembly basics, but only for simplified architectures, so I don't know of any opcodes that specifically recognize a char type — is that a thing on current-day architectures? Or is there something between the C code and the assembly it compiles to (i.e. the compiler) that depends on architecture-specific behaviour?
@dammkewl to be honest I have no idea why it is the case
it's not that x86 assembly 'likes' signed bytes better than ARM/RISC-V
in all other languages I've worked with, types sometimes depend on the word-size of the platform, but definitely the *kind* of type (signed integer, unsigned integer, float, ...) doesn't depend on the platform
it seems a peculiarity that has historically grown that way, and never converged over time