The advantage of using const int instead of just a hardcoded value is that the compiler can check against the variable's type: it will complain if you set a byte to a value > 255 or an int to a value > 32767, which may save some debugging down the road when changes occur. (It's hard to see this advantage for a pin definition, though.)
(D5 is not routinely defined, BTW. But may be necessary on some platforms?)
I'll put on my curmudgeon hat and note that the #define version will automagically use "the right type" depending on context, whereas using "byte" or pin_size_t on a 32-bit CPU may lead to extra instructions. (Although I can't seem to create an example where that actually happens with const, I have seen ARM code with gratuitous "extend byte to word" instructions...)
I obsessed over this (OCD) when I was first starting out.
But really, either works fine.
If you wish to be correct, then note that the use of #define for this purpose has been deprecated (i.e. it is no longer recommended).
Use either const or constexpr (ES.25: Declare an object const or constexpr unless you want to modify its value later on). The constexpr specifier implies const, and the const qualifier implies internal linkage for global variables, so static is unnecessary. If you're just initializing the constant with a literal integer value, constexpr and const will be practically equivalent: they can both be used in constant expressions. For non-integral types, you might prefer constexpr over const. Marking a variable constexpr can also be useful to guarantee that its value will always be calculated at compile time, which can be important for performance.
Don't use byte. It will be removed, and it's not the right type for a pin number: A pin number is an integer, whereas a byte is an unspecified piece of data or memory. See also std::byte - cppreference.com.
Use pin_size_t if available, it will always be the correct integer type for the target platform, and it clearly shows your intent (P.3: Express intent).
If pin_size_t is not available, and if the pin number is below 256, use uint8_t, or define your own type alias.
And how is the code supposed to figure out at compile time whether pin_size_t exists, to ensure that the code is portable across all core platforms and versions of the IDE?
The type to use becomes more problematic for code that needs to store the user's pin configuration in data members within the class object.
For example, the pin numbers might be configured through a constructor.
Alternatively, using something like decltype(A0) would be more portable, as it would grab the type used for the A0 pin, which should be the same type as the digital pins.
When do you think pin_size_t will make it into the cores used for the Arduino UNO, or some popular 3rd party cores like ESP8266? Or even into the Arduino IDE examples and reference pages?
What is the advantage of using constexpr rather than just const, for simple values (like "16")?
entirely too readable by the uninitiated
decltype(A0) would be more portable
Ah. There you go.
constexpr decltype(A0) led_pin {13};
clear as mud!
(heh. There seems to be a sort of dichotomy - a language moves in the direction of "we must have very carefully specified strong typing with well defined scope and ..." (Ada, C++), and then along comes someone else with a "no, you've made computing completely unapproachable! The language can figure it out and be correct nearly as often as a programmer!" (python, javascript))
Well, I don't actually ever do it that way (or use assignment with =).
I just mentioned using decltype(A0) as it is portable vs pin_size_t which is not portable.
For all the Arduino code I write, I use int for Arduino pin constants,
and uint8_t for variable storage within the class to store Arduino pin numbers.
For my libraries I want maximum portability.
The methodology above gives my libraries portability across over 20 IDE versions (all IDEs back to 1.0.1), and all versions of all the Arduino.cc different platform cores and 14 different 3rd party platform cores.
Consistency with the definitions of other values that are constant at compile time.
Might look ridiculous to some, but at least it would work on a lot of platforms today.
I gave 9 cores a try:
// "Blink"
//constexpr decltype(A0) led_pin {13};  // OK
const pin_size_t led_pin = 13;  // not portable: error: 'pin_size_t' does not name a type; did you mean '__size_t'?

void setup() {
  pinMode(led_pin, OUTPUT);
}

void loop() {
  digitalWrite(led_pin, HIGH);  // turn the LED on (HIGH is the voltage level)
  delay(1000);                  // wait for a second
  digitalWrite(led_pin, LOW);   // turn the LED off by making the voltage LOW
  delay(1000);                  // wait for a second
}
It works on 3 of the 9 installed cores. And honestly, those are not my most used ones.
@877
My suggestion is to just pick whichever one you want: #define or const int.
Either will work and will generate the exact same code.
There can be symbol name collisions when using #define, but there can also be name collisions when a variable name clashes with a #define from somewhere outside the sketch code.
While there is an attempt by Arduino.cc to move the APIs to use some new types for better type checking, they can never abandon using ints for Arduino pin numbers without breaking everything, so const ints should always work.
So, while others may expound on the benefits of doing things one way or another, IMO, it really is just a personal choice.
I would tend to lean to using const int to minimize potential macro issues but it really doesn't matter.
I would not use pin_size_t, as it is not portable and will not work on most platforms/cores, including those from Arduino.cc.
Also, any core that uses/supports pin_size_t will also have to support using int to ensure backward compatibility with all the existing code.
--- bill
In terms of maximum portability and conformance to the Arduino APIs, you would actually have to stick with #define.
This is because the Arduino API documentation NEVER defined the types for their API functions.
Just look at digitalRead() / digitalWrite()
They never specified the type for the pin or value arguments.
IMO, the way they have defined & documented the APIs is broken as it should have specified types.
And attempts to fix it now can be very problematic.
Just look back through the mailing list and issues and see the chaos that was created and the amount of code that broke when they moved HIGH and LOW to being PinStatus enums.
The problems occurred because so much code was abusing the APIs by not strictly using ONLY the symbols HIGH and LOW; code was passing ints, and sometimes 0 or 1 or true or false, which would not compile with the new enums and strict checking.
The final solution was to create a workaround for backward compatibility by adding function overloads for ints, which essentially disabled the new type checking for HIGH and LOW.
So why bother?
For integral types initialized by a literal, there is virtually no difference. For other types or more complex initialization, there are two advantages: you can use them in constant expressions, and you ensure that the value is computed at compile time.
Again, just define an alias, which is perfectly fine for a library or a larger sketch:
// Read: use 'pin_t' as an alias for whatever type 'A0' was declared with
using pin_t = decltype(A0);
No need to use macros. Macros also have a type, and it makes no difference for portability which one you use. If you want the same type as the macro, or if you don't care about the exact type, you can just use auto (this is fine for constants in your sketch, but not for saving pin numbers or pin number arguments).
#define MY_PIN 13 // has type 'int' (rvalue)
const auto my_pin = 13; // also 'int' (lvalue)
I tend to agree with the Core Guidelines on this one: there's no good reason to use macros for simple constants like this one. Just use a constant that follows the language rules for scope etc.
Fully agree.
There's definitely value in strict static type checking, e.g. to catch things like pinMode(INPUT, 13), but unfortunately that's very hard to add after the fact like you mentioned.
Even in Python and JavaScript, many code bases are moving towards type-hinted Python or TypeScript. Without type annotations, it is very hard to write correct functions because you cannot make any assumptions about the arguments you get from the user. And even with good documentation, you cannot expect the user to get it right.
I think that opt-in type deduction (like the auto keyword in C++) is a good middle ground between the verbosity of statically typed languages and the wild west of duck typing in dynamic languages.
Macros do not have a type. cpp macros are simply text substitution.
A type is selected when they are used in certain contexts, such as a function argument.
My feeling is that for maximum portability and easy readability it is best to use const int for creating simple symbols for constants, unless there is some reason to use a #define,
like doing something that can only be done using macros.
For example, suppose you have a sketch or library that optionally supports sound.
The sketch could use the existence of a macro that defines the pin to use for sound as the conditional to enable the sound code.
This allows users to easily configure the code, and the sound support can automagically be disabled at compile time when the feature is not desired.
i.e.
#define sound_pin 2 // comment out to disable sound support
And in the code:
#ifdef sound_pin  // check if user wants to enable sound support
  // code for sound support
#endif
Keep in mind one of my main goals is MAXIMUM portability so the code works in all environments.
I'm looking at having my library code work across as MANY IDE versions and 3rd party cores as possible.
Even using things like auto is not totally portable, as it requires C++11, and I don't want to require or depend on that since it isn't always available.
Another example: there is a bunch of stuff that was added in C++17 that could make some things easier, especially for templates, and do some great magical things for the user, but you can't use it because some platform cores aren't using tools that new.
And the #1 reason you can't use it is that, for some odd reason, the newer IDEs force C++11 mode on the Arduino.cc platforms even though the gcc tools are much more up to date and could support C++17 capabilities.
So in the end, IMO, just using simple native types or types from stdint, and using things like const int for simple constants is the best option for portability.
Every expression in C++ has a type; the "type of a macro" is just the type of whatever expression it expands to, and it is independent of the context. For example:
#include <type_traits>

#define SOME_PIN 13
static_assert(std::is_same_v<decltype(SOME_PIN), int>);

#define SOME_LARGER_INTEGER 0xFFFFFFFF
static_assert(std::is_same_v<decltype(SOME_LARGER_INTEGER), unsigned int>);
// (depending on the platform)
All official and popular Arduino Cores support C++11. GCC supports auto since version 4.4 (released in April 2009, over 13 years ago).
Anything before C++11 is ancient, and IMHO you shouldn't be using C++98 or C++03 for any new code.
And if a hardware vendor tells you otherwise: it's been well over a decade, there is no excuse for them still not supporting C++11.
You just made my point.
cpp macros have no type; they are just string substitutions.
It is not until the expanded text is used in an expression that any typing happens.
I think this thread is getting very cluttered and drifting off the original topic of
const int vs #define
And like I previously said, IMO, it really doesn't matter, but const int would be preferred to avoid certain issues that can be created by using #define (macros).
Exactly, then how is it any different from the type of a constant?
I would argue that it does matter. Given the issues and caveats with macros, and the lack of any significant advantage of macros over constants, we should not teach bad practices to beginners. It is in everyone's interest to follow a common set of rules such as the C++ Core Guidelines.