In almost all of the sample sketches I've looked at, integers (int) are used when referencing the I/O pins. Why int and not byte?
Seems to me byte would make a lot more sense: I/O pins can only be positive (int can be negative), and there's no need for int's 2^16 = 65,536 values when byte's 2^8 = 256 would be more than enough.
A variable of type ‘char’ or ‘int’ can hold a negative number.
If the variable is decremented, it can be tested to see whether the result went negative, i.e. < 0.
This may or may not be an advantage.
Otherwise, best to get into the habit of using the smallest type (fewest bytes) that will hold the data.
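A small sketch of the decrement-and-test point above (the loop bounds are arbitrary): a signed counter can actually go below zero, while Arduino's byte (a uint8_t) wraps around instead.

void countDownSigned() {
  for (int i = 9; i >= 0; --i) {
    // runs ten times; the loop ends once i has been decremented to -1
  }
}

void countDownUnsigned() {
  for (byte i = 9; i >= 0; --i) {  // a byte can never be < 0:
    // after 0 it wraps to 255, so this loop never terminates
  }
}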
larryd:
A variable of type 'char' ... gives the capability of a negative number.
char has to be assumed to be neither signed nor unsigned, especially if the code is meant to be portable. I believe the phrase is "implementation-defined behavior".
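A small illustration of why that matters for portability (0x80 is just an arbitrary test value, and the per-target defaults in the comments are typical rather than guaranteed):

bool isNegativeChar() {
  char c = 0x80;   // bit pattern 1000 0000
  return c < 0;    // true where plain char is signed (e.g. avr-gcc),
                   // false where it is unsigned (e.g. typical ARM ABIs)
}

If the sign actually matters, spelling it out as signed char or unsigned char removes the ambiguity.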
Look at the early Arduino books from before about 2008: before the "#define byte" macro was added, most variables were int, char or long, and many people are still learning from those books and know no better.
The 'int' declarations replaced the '#define' style that was originally used in the examples.
CS Purists didn’t like it because it wasn’t typed. I claim it never needed to be typed… (better NOT typed than WRONG typed?)
In some cases (Serial.read() is one example), using “int” allows negative numbers to be used as an error code, while still allowing the full 0–255 range of an 8-bit byte.
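For example, with the standard Arduino Serial API the -1 "no data" value only survives if the result is kept in an int:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int c = Serial.read();   // returns -1 when no byte is waiting
  if (c >= 0) {
    // c is a genuine received byte, 0..255
  }

  byte b = Serial.read();  // here a -1 would be truncated to 255 (0xFF),
                           // indistinguishable from a real 0xFF data byte
}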
“int” is actually supposed to be the “natural size” of the processor, and is more correct for generic variables. But it was conceived before there WERE 8-bit CPUs, and 8 bits is inconveniently small even if it is the natural size of an AVR, so int became 16 bits. Using “uint8_t” or “byte” on some processors (e.g. ARM) actually results in less efficient code.
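If the intent is "at least 8 bits, but whatever width is fastest on this CPU", the standard <stdint.h> fast types say that explicitly; this is only a portable alternative, not something the Arduino examples use:

#include <stdint.h>

uint8_t      exactWidth = 0;  // exactly 8 bits, whatever that costs on a given CPU
uint_fast8_t fastWidth  = 0;  // at least 8 bits; the toolchain picks whichever
                              // width it considers fastest for the target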
westfw:
CS Purists didn’t like it because it wasn’t typed. I claim it never needed to be typed…
Either that, or the long list of macro-related failures tends to steer us away from #define. (I helped someone with such a problem just this week. Fortunately the problem was trapped by the compiler.)
Add your #define to this snippet…
void BlinkIt( int LED )   // blink the given pin ten times
{
  for ( int i = 0; i < 10; ++i )
  {
    digitalWrite( LED, HIGH );
    delay( 250 );
    digitalWrite( LED, LOW );
    delay( 250 );
  }
}
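To make the point concrete: with a hypothetical #define LED 13 in effect, the preprocessor rewrites the parameter name too, and the snippet no longer compiles.

#define LED 13            // hypothetical pin macro added above the snippet

void BlinkIt( int LED )   // the preprocessor rewrites this as: void BlinkIt( int 13 )
{                         // ...which is a syntax error, so the body is never reached
  digitalWrite( LED, HIGH );
  delay( 250 );
  digitalWrite( LED, LOW );
  delay( 250 );
}

A file-scope const byte LED = 13; would also share the parameter's name, but the compiler resolves that by normal scoping rules rather than blind text substitution.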
Doug101:
In almost all of the sample sketches I've looked at, integers (int) are used when referencing the I/O pins. Why int and not byte?
Seems to me byte would make a lot more sense: I/O pins can only be positive (int can be negative), and there's no need for int's 2^16 = 65,536 values when byte's 2^8 = 256 would be more than enough.
Anyone have a good explanation?
Thanks
C programmers tend to use int everywhere, because C automatically promotes smaller types to int. I believe this also applies to function arguments, although I'm not 100% sure. On a machine that has a 16-bit bus, using bytes doesn't win you anything in terms of speed: it all goes across the wires simultaneously. Since pin numbers tend to be constants that are substituted in by the compiler, using a byte doesn't make the sketch smaller. And even if it did, the sketch would only be smaller by a byte or so.
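A quick illustration of that promotion rule (nothing Arduino-specific, just C arithmetic on made-up values):

byte a = 200;
byte b = 100;
int  promoted  = a + b;  // both operands are promoted to int, so this is 300
byte truncated = a + b;  // still computed as 300, then cut down to 300 - 256 = 44
                         // when it is stored back into a byte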
Personally, I use byte because the pin I/O functions are declared to take a byte.
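In practice that just looks like the usual blink sketch with a byte pin constant (pin 13 is only an example; the core's pinMode() and digitalWrite() take a uint8_t pin argument):

const byte ledPin = 13;   // byte matches the uint8_t parameter of the pin functions

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  digitalWrite(ledPin, HIGH);
  delay(500);
  digitalWrite(ledPin, LOW);
  delay(500);
}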