Using an integer type to define pins, why?

My problem with 'byte' is that it doesn't tell me enough about it.

I believe it is a synonym for 'unsigned char' (because these things differ from platform to platform), but I'm never certain and always find myself doing the double-check.

I much prefer 'uint8_t' as it's easier to know what you're getting and is the same everywhere I want to be.

That's fine, but the core type is "unsigned char" as shown here in stdint.h:

/** \ingroup avr_stdint
    8-bit unsigned type. */

typedef unsigned char uint8_t;

You just have to remember that "unsigned char" is an 8-bit unsigned type.

Using uint8_t is one typedef away from the core type.
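
If you'd rather check than remember, a quick sketch (my own, not from stdint.h; static_assert needs the C++11-era toolchain that recent IDEs ship) refuses to compile unless all three names really are one byte wide:

// One byte each, or compilation stops here:
static_assert (sizeof (byte) == 1,          "byte is 8 bits");
static_assert (sizeof (uint8_t) == 1,       "uint8_t is 8 bits");
static_assert (sizeof (unsigned char) == 1, "unsigned char is 8 bits");

void setup ()
  {
  byte          a = 200;   // Arduino's name for it
  uint8_t       b = a;     // the <stdint.h> name
  unsigned char c = b;     // the core type itself: no casts needed anywhere
  }

void loop () { }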

This is obviously a style issue, but for me, I prefer to remember that. It's like saying:

"I want to call a dog a MammalWith2LegsThatBarks because that tells me more about it."

OK, but it's really a dog.

I don't want to start a "style war" here, hey next we'll be arguing about GOTO. :wink:

lloyddean:
I much prefer 'uint8_t' as it's easier to know what you're getting and is the same everywhere I want to be.

If they have the typedef I quoted. If not, it's an unsigned char.

Hey, wait, I just shot myself in the foot here. I should be using "unsigned char" and not "byte". Oh well.

You own a two legged dog that barks? Post a video or it isn't true. :smiley:

Lefty

I was very emotional when I wrote that. :slight_smile:

Nick - I'm not sure what you're getting at, or whether you're upset or not.

Just saying that, for me, 'byte' is ambiguous as to whether it's signed or unsigned (unless I look it up). With 'uint8_t' it's clear that it's unsigned, and it's unsigned on all platforms, whereas 'byte', where it's defined at all, can be either.

I'm only upset that I thought a dog had two legs.

Well they do have two legs, plus two more. :wink:

I think many people who know better sometimes just use int when trying to help someone to whom uint8_t, or even byte, would cause additional confusion. One can't learn everything at once...

retrolefty:
Well they do have two legs, plus two more. :wink:

Two legs, squared?

retrolefty:
The only additional question I have about pin numbers is why they are int instead of byte type?

Using int for pins, and the other minor issues in the examples, are there because the examples were written by humans.

retrolefty:

The correct way is to use const int:

Code:
const int potpin = A0;

This way it doesn't take up any space, but if you ever want to change the pin assignment, it is easy to change in one location, rather than hunting for all of the 0's in the program. Note, in your example, using pin 0 is wrong, since that is a digital pin. The Arduino environment defines A0..A5 (or A9, etc., depending on the target) so that you don't have to remember that on the Uno the first analog pin is 14, on the Leonardo it is 18, and on the Mega it is 54.
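
In sketch form (a minimal example of mine, not from the original post), that portability looks like this; the same code compiles unchanged for any of those boards:

const int potpin = A0;   // resolves to 14, 18 or 54 depending on the board

void setup ()
  {
  Serial.begin (9600);
  }

void loop ()
  {
  int reading = analogRead (potpin);   // 0..1023 from the 10-bit ADC
  Serial.println (reading);
  delay (500);
  }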

Are you sure you are not mixing up const with #define statements? A const still takes up space; it just tells the compiler to treat it as a read-only value. A #define is just a compile-time macro that doesn't take up memory space in the target code. At least that is my take on it?

Lefty

In C you would be correct. However, in C++, const variables set to a constant integer are treated as constants by the compiler, so they can be used for case statements, array bounds, etc. This is one of the subtle differences between C and C++. I was on the original ANSI C committee that produced the ANSI C89 and ISO C90 standards, so I have played language lawyer in the past. Since Arduino uses C++ as its base language, the C++ rules for const are used.
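
Here's what that buys you in practice (my illustration; it assumes the AVR core, where A0 is itself a constant expression). A C89 compiler would reject both marked lines, but C++ accepts them:

const int potpin = A0;
const int numReadings = 5;

int readings [numReadings];   // array bound: fine in C++, an error in C89

int matchesPot (int pin)
  {
  switch (pin)
    {
    case potpin:              // case label: fine in C++, an error in C
      return 1;
    default:
      return 0;
    }
  }

void setup () { }
void loop () { }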

MichaelMeissner:
The correct way is to use const int:

const int potpin = A0;

What's wrong with

#define potpin A0

??


MichaelMeissner:
The correct way is to use const int:

Code:

const int potpin = A0;

What's wrong with

#define potpin A0

I can't think of any good reason to use a signed datatype to number pins :sunglasses:
Or, in the case of the Arduino, one with a numeric range greater than 255.

void pinMode(uint8_t, uint8_t);

Nor, it would seem, could the Arduino designers.
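
One side effect worth knowing (a contrived example of mine): because the parameter is a uint8_t, an out-of-range pin number is silently truncated modulo 256 rather than rejected:

const int bogusPin = 256;       // does not fit in a uint8_t

void setup ()
  {
  pinMode (bogusPin, OUTPUT);   // quietly arrives as pin 0 (256 mod 256)
  }

void loop () { }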

Krupski:
What's wrong with

#define potpin A0

Here's one objection. Let's make a simple example that raises an error:

#define potpin A0

void setup ()
  {
  potpin /= 3;
  }  // end of setup

void loop () { }

Error:

sketch_feb03a.ino: In function 'void setup()':
sketch_feb03a:5: error: assignment of read-only variable 'A0'

Looking at the error line:

  potpin /= 3;

You think "what the heck is A0"?

But if you use a constant:

const byte potpin = A0;

void setup ()
  {
  potpin /= 3;
  }  // end of setup

void loop () { }

You get a reasonable error message:

sketch_feb03a.ino: In function 'void setup()':
sketch_feb03a:5: error: assignment of read-only variable 'potpin'

Using #define more than you have to leads to all sorts of obscure messages, where you have to sift back through the original defines to see what is really going on. After all, the preprocessor has simply done a text substitution before the compiler proper ever sees the code.
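
Another classic of the genre (my own deliberately broken sketch): a stray token hiding in the define surfaces as a baffling syntax error at the point of use, nowhere near the define itself:

#define potpin A0 ;   // stray semicolon: easy to type, hard to spot

void setup ()
  {
  pinMode (potpin, INPUT);   // expands to: pinMode (A0 ;, INPUT);
  }

void loop () { }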

AWOL:
I can't think of any good reason to use a signed datatype to number pins :sunglasses:
Or, in the case of the Arduino, one with a numeric range greater than 255.

void pinMode(uint8_t, uint8_t);

Nor, it would seem, could the Arduino designers.

Due to C's historical roots on the PDP-11 and being designed before prototypes (which came from C++), having char, short, and float as argument types can result in more code than passing ints or doubles. This is because in most ABIs you have to pass a full-sized int/double, and the compiler has to truncate or convert the int/double back to char/short/float in the called function's prologue. On the AVR it is probably the reverse, since the AVR is an 8-bit processor, whereas ARM/x86/PowerPC process ints in larger units (32 or 64 bits).
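
You can still watch those old promotion rules at work anywhere there is no parameter type to match against, for instance in the variadic part of a function. A contrived sketch of mine (the sum function is just an illustration):

#include <stdarg.h>

// In the "..." part there is no declared type to convert to, so the
// default promotions apply: char and short arrive as int, float as double.
int sum (int count, ...)
  {
  va_list ap;
  va_start (ap, count);
  int total = 0;
  for (int i = 0; i < count; i++)
    total += va_arg (ap, int);   // read back as int, not as char
  va_end (ap);
  return total;
  }

void setup ()
  {
  Serial.begin (9600);
  char a = 1, b = 2;
  Serial.println (sum (2, a, b));   // a and b are promoted to int at the call
  }

void loop () { }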

It's an interesting example, Nick, but I honestly can't remember the last time I divided a pin number by...anything, really XD

and being designed before prototypes

I'm pretty sure I had to write prototypes for C.

Krupski:

MichaelMeissner:
The correct way is to use const int:

const int potpin = A0;

What's wrong with

#define potpin A0

??

There are two main reasons (plus one point in #define's favor):

  • On systems with a real source-level debugger, instead of print statements, const ints are available for use in debugger expressions, while typically #defines are not.
  • #defines are pure textual substitution, and the preprocessor doesn't look at the context. So with const int potpin, you can still use potpin as a structure element, since the compiler knows the member isn't the value at global scope, while with a #define the substitution is always made (see the sketch after this list).
  • On the other hand, you can use #define values in #if conditional expressions, where a const variable can't appear.
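
To make the second bullet concrete, here's a sketch (my own example, with a hypothetical Knob struct, and a plain number so the substitution stays visible):

const int potpin = 14;   // a scoped name, not a blind text substitution

struct Knob
  {
  int potpin;   // fine: the compiler knows this member is not the global
  };

// The macro version is pure text, so the same struct breaks:
//
//   #define potpin 14
//   struct Knob { int potpin; };   // expands to "int 14;" -> error

void setup () { }
void loop () { }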

AWOL:
It's an interesting example, Nick, but I honestly can't remember the last time I divided a pin number by...anything, really XD

and being designed before prototypes

I'm pretty sure I had to write prototypes for C.

Before the ANSI standard (i.e. the original K&R C, and before that the compilers used by Version 6/7 UNIX from Bell Labs and the Berkeley distributions) there were no prototypes. The X3J11 committee started in 1983 or so; the result became a US ANSI standard in 1989 and a worldwide ISO standard in 1990. Prototypes, const, and volatile were explicitly taken from C++ and put into C, though as I mentioned some changes were made (const is a lot weaker in C than in C++, and we had to add an explicit void in prototypes to distinguish functions with no prototype from functions with a prototype that takes no arguments).

The C compiler that I wrote the front end for (from scratch, in PL/1) for the Data General MV/Eclipse (1981-ish) had no prototypes. Towards the end of my employment at Data General (1989), I was working on the GNU compiler for the 88000 processor, which already had prototypes added to it.

In C89/90, prototypes are optional, and if there is no prototype, the compiler would essentially create one for you, with the short types (char, short, float) promoted to their standard forms (int, int, and double). Internally, within the committee, we called this the Miranda rule, after a famous ruling by the US Supreme Court. Our version of the Miranda rule went something like: you have the right to a prototype; if you do not have a prototype, one will be appointed for you by the compiler. We did put in a proviso that declaring functions without a prototype might be removed in a future standard.

Note, C++ requires prototypes, but the IDE 'helps' users by adding prototypes for all user functions. Evidently (judging from another posting) it gets enums wrong, in that it creates a prototype before the enum definition, and the compiler rightfully complains.
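
If you hit that, the usual workaround (my understanding of the behaviour described above, not gospel) is to write the prototype yourself, after the enum, since the IDE leaves functions alone once they already have one:

enum Mode { OFF, ON };

void setMode (Mode m);   // hand-written prototype, placed after the enum

void setup ()
  {
  pinMode (LED_BUILTIN, OUTPUT);
  setMode (ON);
  }

void loop () { }

void setMode (Mode m)
  {
  digitalWrite (LED_BUILTIN, m == ON ? HIGH : LOW);
  }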

It's an interesting example, Nick, but I honestly can't remember the last time I divided a pin number by...anything, really

I'd agree that you haven't, but there are plenty of examples posted each week where someone assigns a value to the pin-number variable instead of the pin-state variable, and then wonders why the code isn't reading the switch correctly the second time.
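
The slip usually looks something like this (a reconstruction of mine, not a quote from any particular thread). Declare the pin const and it becomes a compile-time error instead of a head-scratcher:

const byte buttonPin = 2;   // const: the mistake below can't compile
byte buttonState;

void setup ()
  {
  pinMode (buttonPin, INPUT_PULLUP);
  }

void loop ()
  {
  // The classic slip: with a plain "byte buttonPin" this next line
  // compiles, and the sketch silently reads the wrong pin ever after.
  // buttonPin = digitalRead (buttonPin);

  buttonState = digitalRead (buttonPin);   // what was meant
  }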