Why use int instead of byte?

Wouldn't be a problem if people stuck to the

#define LED 13

style that was originally used in the examples. :slight_smile:
CS purists didn't like it because a #define isn't typed at all: the preprocessor just substitutes the text. I claim it never needed to be typed... (better NOT typed than WRONGLY typed?)
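
For comparison, the typed alternatives the purists push for look something like this (the LED_TYPED names are just mine for illustration):

#define LED 13            // untyped: the preprocessor pastes "13" into the code
const int LED_TYPED = 13; // typed constant; the compiler can check how it's used
const byte LED_BYTE = 13; // typed, and explicitly 8-bit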
In some cases (Serial.read() is one example), using "int" allows negative numbers to be used as an error code while still allowing the full range of 8-bit bytes.
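
That's exactly how Serial.read() works: it returns -1 when nothing is waiting, and 0..255 for a real byte, so the two can never collide. A minimal echo sketch:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int c = Serial.read();   // int, not byte: -1 means "no data available"
  if (c < 0) {
    return;                // nothing to read yet
  }
  byte b = (byte)c;        // safe to narrow now: c is guaranteed 0..255
  Serial.write(b);         // echo the byte back
}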

"int" is actually supposed to be the "natural size" of the processor, and is more correct for generic variables. But it was conceived before there WERE 8bit CPUs, and 8bits is inconveniently small even if it is the natural size of an AVR, so int became 16bit. Using "uint8_t" or "byte" on some processors (ie ARM) actually results in less efficient code.