The documentation for the char data type is incorrect: char is unsigned in this implementation.
void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  {
    int myVar = 1;
    Serial.println(myVar >= 0 ? "positive" : "negative");
  }
  {
    int myVar = -1;
    Serial.println(myVar >= 0 ? "positive" : "negative");
  }
  {
    char myVar = 1;
    Serial.println(myVar >= 0 ? "positive" : "negative");
  }
  {
    char myVar = -1;
    Serial.println(myVar >= 0 ? "positive" : "negative");
  }
  {
    signed char myVar = -1;
    Serial.println(myVar >= 0 ? "positive" : "negative");
  }
}

void loop() {
  // put your main code here, to run repeatedly:
}
And the output is:
positive
negative
positive
positive
negative
Obviously one can force the var to be signed by way of the signed keyword.
Still, the doc web page is incorrect:
The char datatype is a signed type, meaning that it encodes numbers from -128 to 127. For an unsigned, one-byte (8 bit) data type, use the byte data type.
Thanks for the additional data point. So, how do we interpret the differing behavior?
The gcc man pages note compiler options -funsigned-char and -fsigned-char (I think that's what they're called), but a search through the verbose output of the build process shows neither option.
So I guess it comes down to the default behavior of the gcc compiler for the target.
Or perhaps the real lesson here is: don't assume anything, at least when it comes to signedness (and I'd suspect size as well).
It is processor dependent; why the default is signed on some targets and unsigned on others, no one knows exactly. If you want a guaranteed unsigned type, you need to declare it as such, which is exactly the typedef that <stdint.h> already supplies:
typedef unsigned char uint8_t;
You should also avoid assuming that the generic data types (like int) match the CPU bus size. On Atmel CPUs with gcc, an int is 16-bit even though the CPU is 8-bit, because the C language specification states that an int must be at least 16 bits long. On ARM and x86 you have 32 bits, and in some cases 64.