In the original C standard, whether plain char is signed or unsigned was deliberately left implementation-defined: each compiler chooses. That causes problems when char is used as a small numeric type. Rather than force a change on existing implementations, the standard defines signed char and unsigned char as types distinct from plain char. So there are really three character types.
So you should never use char as a numerical type, only unsigned char or signed char.
It is even possible that two compilers for a given machine will have different numerical behavior for char.
You can have three overloaded functions, f(char), f(signed char), and f(unsigned char).
For all the other integer types, the signed keyword is redundant: int and signed int name the same type, so f(int) and f(signed int) would be the same declaration twice, not two overloads.
Edit: I just remembered that with gcc you can have "char" signed or unsigned with flags -funsigned-char or -fsigned-char.
I ran into something very similar to this a little over a year ago and got into some serious C standard "discussions" over how incrementing chars is handled in loops.
Here is the AVR freaks thread: www.avrfreaks.net/index.php?name=PNphpBB2&file=viewtopic&t=111837
The standard deliberately leaves some behavior unspecified to give implementors leeway.
The problem I had was that how the increment of a "char" was handled, and how the result was then tested or passed to a sub-function, varied depending on how the variable was declared (automatic vs. static), the optimization level, and whether the variable was also passed to other functions as an argument.
In my book, that is a case of overly aggressive optimization generating incorrect code in certain circumstances. Technically, though, it can't be considered wrong, because the standard doesn't explicitly state how math on type "char" must behave in all cases.
So the compiler guys can always just claim that the unexpected (wrong) behavior was undefined anyway.
My beef was that even if the standard leaves a particular behavior unspecified, it should at least be consistent within a single implementation.
Just to add some spice to this thread: another difference between AVR and ARM is that ARM is a 32-bit processor. So if you are used to working with char because of program-size limitations, I guess you will need to change your habits with the Due.
rbid:
Just to add some spice to this thread: another difference between AVR and ARM is that ARM is a 32-bit processor. So if you are used to working with char because of program-size limitations, I guess you will need to change your habits with the Due.
That makes for interesting reading. It may also explain why the Due runs my Nokia LCD slower than a 16 MHz AVR. I think I may have to add many more #if/#else blocks.
EDIT:
That improved the speed slightly (switching all chars to ints, and unsigned chars to unsigned ints), though it is still slower. Perhaps the remaining gap comes from the difference between digitalWrite() on the Due and direct port access on AVR.