Why use int instead of byte?

In almost all of the sample sketches I've looked at, integers (int) are used when referencing the I/O pins. Why int and not byte?

Seems to me byte would make a lot more sense, as I/O pins can only be positive (int can be negative), and there's no need for int's 2^16 = 65,536 values when byte's 2^8 = 256 would be more than enough.

Anyone have a good explanation?

Thanks

Anyone have a good explanation?

• People are lazy and just use int out of habit
• People don’t know the difference
• People follow bad examples

Mind you, in the average sketch it makes no difference anyway

Using const would make good sense too, but:
• People are lazy and just use int out of habit
• People don’t know the difference
• People follow bad examples

Mind you, in the average sketch it makes no difference anyway

Historical artifact.

In either case, a named constant (of type byte) is the best choice.

A variable of type ‘char’ or ‘int’ gives the capability of a negative number.
If the variable is decremented, it can be tested to see if the result is negative, i.e. < 0.
This may or may not be an advantage.

Best to get into the habit of using the smallest type that will do the job.
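A quick sketch of the trade-off being hinted at, using a hypothetical countdown loop:

for (int i = 9; i >= 0; --i) { }   // signed: terminates once i drops below zero

for (byte i = 9; i >= 0; --i) { }  // unsigned: i wraps from 0 to 255, so the test is always true and the loop never ends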

larryd:
A variable of type 'char' ... gives the capability of a negative number.

char has to be assumed to be neither signed nor unsigned, especially if the code is meant to be portable. I believe the phrase is "implementation-defined behavior".

@CB

"implementation defined behavior"

Well, this is the last platform that I will probably learn … :confused:

Better yet, use the types from stdint.h and it will be explicitly clear what type each variable is.
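For instance (the names and values here are just placeholders):

#include <stdint.h>

const uint8_t ledPin = 13;   // exactly 8 bits, unsigned; Arduino's byte is a typedef for uint8_t
int16_t  offset    = -512;   // exactly 16 bits, signed
uint32_t elapsedMs = 0;      // exactly 32 bits, unsigned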

larryd:
Well, this is the last platform that I will probably learn …

Ah, but, even with the Arduino platform you can run into trouble...

• If you are using avr-gcc to build for AVR based boards like the Uno then char is signed.

• If you are using the ARM compiler to build for ARM based boards like the Zero then char is unsigned.
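A sketch of how that bites in practice (the -1 is just for illustration):

char c = -1;   // stored as -1 by avr-gcc, as 255 by the ARM compiler

if (c < 0)
{
  // reached on an Uno, skipped on a Zero;
  // if the sign matters, say signed char / unsigned char (or int8_t / uint8_t) explicitly
}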

Because you have to type one extra character for byte.
Ahhh... people are lazy. :slight_smile:

Look at the early Arduino books from before about 2008: before the "#define byte" macro was added, most variables were int, char or long. Many people are still learning from those books and know no better.

Wouldn’t be a problem if people stuck to the

#define LED 13

style that was originally used in the examples. :slight_smile:
CS Purists didn’t like it because it wasn’t typed. I claim it never needed to be typed… (better NOT typed than WRONG typed?)
In some cases (Serial.read() is one example), using “int” allows negative numbers to be used as an error code, while still allowing the full range of 8-bit bytes.
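In sketch form, that convention looks like this:

int c = Serial.read();   // returns -1 when no byte is waiting, 0..255 otherwise

if (c >= 0)
{
  // c really is a received byte here
}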

“int” is actually supposed to be the “natural size” of the processor, and is more correct for generic variables. But it was conceived before there WERE 8-bit CPUs, and 8 bits is inconveniently small even if it is the natural size of an AVR, so int became 16-bit. Using “uint8_t” or “byte” on some processors (i.e. ARM) actually results in less efficient code.

westfw:
CS Purists didn’t like it because it wasn’t typed. I claim it never needed to be typed…

Either that or the long list of macro-related failures tends to lean us away from #define. (I helped someone with such a problem just this week. Fortunately the problem was trapped by the compiler.)

Add your #define to this snippet…

void BlinkIt( int LED )   // with "#define LED 13" active, this becomes "void BlinkIt( int 13 )": a compile error
{
  for ( int i=0 ; i < 10; ++i )
  {
    digitalWrite( LED, HIGH );
    delay( 250 );
    digitalWrite( LED, LOW );
    delay( 250 );
  }
}

Use a typed constant instead. Trouble averted.
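That is, something along these lines:

const byte LED = 13;      // a typed constant in place of "#define LED 13"

void BlinkIt( int LED )   // the parameter merely shadows the constant; no preprocessor mangling
{
  digitalWrite( LED, HIGH );
}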

westfw:
(better NOT typed than WRONG typed?)

Has merit.

const byte LED = 1034;

void setup() 
{
  pinMode( LED, OUTPUT );
}

void loop() {}

No warning or error. Ugh.

Pin numbers are numbers, int is for numbers, byte is for geeky computery stuff.

I wouldn't say it's a "good" explanation but it is one I've heard more than once.

Steve

You seem to have warnings off then.

foo.c:2:18: warning: large integer implicitly truncated to unsigned type [-Woverflow]
 const byte LED = 1034;
                  ^

As far as I know, on my project I'm using #define.

The reason is that #define takes no memory, as compared to an int, char, etc.

PacificThunder:
As far as I know, on my project I'm using #define.

The reason is that #define takes no memory, as compared to an int, char, etc.

That's jolly clever.

PacificThunder:
As far as I know, on my project I'm using #define.

The reason is that #define takes no memory, as compared to an int, char, etc.

Does const int myPin = 13; take any memory? :slight_smile:
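For what it's worth, the usual answer is no: when the constant's address is never taken, the compiler folds the value straight into the generated code, just as with a #define. A minimal sketch:

const byte myPin = 13;

void setup()
{
  pinMode( myPin, OUTPUT );   // compiles to the literal 13; with optimization on, no RAM is used
}

void loop() {}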

oqibidipo:
You seem to have warnings off then.

Ah yes. So I do. Thank you.

westfw:
(better NOT typed than WRONG typed?)

Wrong size is caught if warnings are enabled.

Making a typed constant unsigned may force an unnecessary promotion.

So, "maybe but rarely" seems like the right answer.

Doug101:
In almost all of the sample sketches I've looked at, integers (int) are used when referencing the I/O pins. Why int and not byte?

Seems to me byte would make a lot more sense, as I/O pins can only be positive (int can be negative), and there's no need for int's 2^16 = 65,536 values when byte's 2^8 = 256 would be more than enough.

Anyone have a good explanation?

Thanks

C programmers tend to use int everywhere, because C automatically promotes things to int if they are smaller. I believe this also includes function arguments, although I'm not 100% sure. On a machine that has a 16-bit bus, using bytes doesn't win you anything in terms of speed: it all goes across the wires simultaneously. Since pin numbers tend to be constants that are subbed in by the compiler, using a byte doesn't make the sketch smaller. And even if it did, the sketch would only be smaller by a byte or so.
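A small illustration of that promotion rule:

byte a = 200;
byte b = 100;
int sum = a + b;   // both operands are promoted to int before the add, so sum is 300, not 44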

Personally, I use byte because the pin I/O functions are declared to take a byte.