Is there any difference between these?
#define _BV(bit) (1 << (bit))
#define bit(b) (1UL << (b))
You tell us.
_BV is a macro
#define _BV(bit) (1 << (bit))
Too late, as Coding Badly said.
Thank you for the definitions. If this were an exam about types I probably wouldn't get a good grade.
I tried a few examples and could identify cases where bit() worked while _BV() did not. But I couldn't come up with an example of the opposite.
When would _BV() be a better choice?
I would use bit personally. I used to use _BV but I have a rule these days to steer clear of variables / functions / macros starting with an underscore.
#define bit(b) (1UL << (b))
Conceivably that would work better with larger shifts.
void setup ()
{
Serial.begin (115200);
Serial.println ();
unsigned long foo = bit (24);
unsigned long bar = _BV (24);
Serial.println (foo);
Serial.println (bar);
} // end of setup
void loop () { }
Output:
16777216
0
So it looks like bit() is more reliable.
Thanks, that's what I found too.
There must be some reason _BV() was created.
Or does it predate bit()?
Maybe even clearer is to just write (1<<N) instead of using a macro.
_BV is part of avr-libc. The intended use is to create bitmasks for manipulating register values. The bitmask is 16 bits (the width of an int on AVR), which is reasonable for the intended use.
bit is part of the Arduino interface. I assume the intended use is general purpose bit manipulation. The bitmask is 32 bits, which seems reasonable to me for the intended use.
'Coding Badly' - 07/Nov/2014:
_BV is part of avr-libc. The intended use is to create bitmasks for manipulating register values. The bitmask is 16 bits which is reasonable for the intended use.
I want a bit mask for 8-bit register values. It seems like bit(N) or _BV(N) or 1<<N will work just fine. It's a matter of preference I suppose, but _BV() seems awkward and not very intuitive to me. I like bit() better. Both bit() and _BV() suffer from being cloaked in a macro. What's wrong with just writing 1<<N? It's not longer than writing bit(N) or _BV(N). It seems obvious what it does.
I think I'll go with 1<<N unless there's a compelling argument to use one of the macros.
What's wrong with just writing 1<<N?
They suffer from being cloaked in a macro.
There's agreement here.
Edit:
However, some time ago I typed up some macros for the UNO: TOGGLEdXX, which I use whenever I need to change the state of a pin, and IMO they make more sense.
#define TOGGLEd2 PIND = _BV(PIND2) // Toggle digital pin D2
#define TOGGLEd3 PIND = _BV(PIND3) // Toggle digital pin D3
#define TOGGLEd4 PIND = _BV(PIND4) // Toggle digital pin D4
#define TOGGLEd5 PIND = _BV(PIND5) // Toggle digital pin D5
#define TOGGLEd6 PIND = _BV(PIND6) // Toggle digital pin D6
#define TOGGLEd7 PIND = _BV(PIND7) // Toggle digital pin D7
#define TOGGLEd8 PINB = _BV(PINB0) // Toggle digital pin D8
#define TOGGLEd9 PINB = _BV(PINB1) // Toggle digital pin D9
#define TOGGLEd10 PINB = _BV(PINB2) // Toggle digital pin D10
#define TOGGLEd11 PINB = _BV(PINB3) // Toggle digital pin D11
#define TOGGLEd12 PINB = _BV(PINB4) // Toggle digital pin D12
#define TOGGLEd13 PINB = _BV(PINB5) // Toggle digital pin D13
jboyton:
Maybe even clearer is to just write (1<<N) instead of using a macro.
If you don't mind it failing for shifts of 15 bits or more (int is only 16 bits on AVR).
Personally I think all this (1 << N) stuff is unnecessarily obscure.
I would rather write:
EIMSK |= bit (INT0);
than shoving in shifts where you have to stop and look for a moment. Is it shifted the correct way? Did they accidentally use < rather than <<? What is the operator precedence?
Or even:
bitSet (EIMSK, INT0);
Apart from the technical issues above, IMHO the purpose of a macro is twofold:
1 - To provide a fast way for a software developer to write syntactically correct code.
'Nick Gammon' - 07/Nov/2014:
Is it shifted the correct way?
Did they accidentally use < rather than <<?
What is the operator precedence?
2 - To make the source code easier to read, when the code is simpler with one very readable line.
'Nick Gammon' - 07/Nov/2014:
- I would rather write:
EIMSK |= bit (INT0);
IMHO again, it is far better to write easily understood code (which is then, hopefully, easier to maintain) than it is to write super-efficient code that saves 3 bytes and 1 instruction cycle.
On a microcontroller, you just need your code to be efficient enough that there are spare cycles between all the real-time events that need to be processed.
There is no point sacrificing understanding just to end up with 24 spare cycles instead of 19 and an easy-to-read application.
In summary, I believe that developers will find macros are great, because they help them to be more efficient and can improve the readability of the code.
Regards from 'Down Under', EnigmaTS.