When using pulseIn() I noticed strange results when I used a timeout parameter of 1/2 second. The wiring_pulse.c code handles the timeout by calculating how many iterations to run through its polling loop, calling microsecondsToClockCycles(timeout) from wiring.h to get this number. In wiring.h this is defined as follows:

#define microsecondsToClockCycles(a) ( ((a) * (F_CPU / 1000L)) / 1000L )
So if F_CPU is 16 MHz for an ATmega and one uses a timeout of, say, 500,000 microseconds, you end up with ( ((500,000) * (16,000,000 / 1000L)) / 1000L ).
Because of the order of operations, the intermediate product 500,000 * 16,000 = 8,000,000,000 overflows a 32-bit unsigned long (maximum 4,294,967,295).
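To see the wrap-around concretely, here is a minimal sketch that compiles on a desktop; since a desktop long is usually 64 bits, it uses uint32_t explicitly to mimic the AVR's 32-bit long (the variable names are just for illustration):

#include <stdio.h>
#include <stdint.h>

#define F_CPU 16000000L

int main(void) {
    uint32_t timeout = 500000;  /* 1/2 second, in microseconds */

    /* The 0022 macro evaluated in 32-bit math, as the AVR would do it */
    uint32_t cpu_khz = F_CPU / 1000;                /* 16,000 */
    uint32_t wrapped = (timeout * cpu_khz) / 1000;  /* product wraps mod 2^32 */

    /* The value the macro was meant to produce */
    uint64_t expected = ((uint64_t)timeout * cpu_khz) / 1000;

    printf("wrapped:  %lu cycles\n", (unsigned long)wrapped);        /* 3705032 */
    printf("expected: %llu cycles\n", (unsigned long long)expected); /* 8000000 */
    return 0;
}

Divided by 16, the wrapped value gives pulseIn() a budget of about 231,564 loop iterations (roughly 232 ms) instead of the requested 500 ms, which would explain the strange results.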
Changing the order like this will avoid the overflow:

#define microsecondsToClockCycles(a) ( ((a/1000L) * (F_CPU / 1000L)) )

This is for arduino-0022.
From wiring_pulse.c:
unsigned long numloops = 0;
unsigned long maxloops = microsecondsToClockCycles(timeout) / 16;

// wait for any previous pulse to end
while ((*portInputRegister(port) & bit) == stateMask)
    if (numloops++ == maxloops)
        return 0;
BenF:
The re-ordering above will break for values of (a) less than 1000 (integer division truncates, so a/1000L becomes 0), and so it's no good, as other parts of the core depend on this working.
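A quick desktop check of that truncation; the _reordered name is made up here just to keep the two variants apart:

#include <stdio.h>

#define F_CPU 16000000L
#define microsecondsToClockCycles_reordered(a) ( ((a/1000L) * (F_CPU / 1000L)) )

int main(void) {
    /* The division happens first, so anything under 1000 us collapses to 0 */
    printf("%ld\n", (long)microsecondsToClockCycles_reordered(100L)); /* 0, should be 1600 */
    printf("%ld\n", (long)microsecondsToClockCycles_reordered(999L)); /* 0, should be 15984 */
    return 0;
}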
Yes, that makes sense. I made that code change just to get past the problem and see how pulseIn() worked.
BenF:
A simple fix might be to divide timeout by 16 prior to calling microsecondsToClockCycles as in the following:
unsigned long maxloops = microsecondsToClockCycles(timeout / 16);
Then you should be good up to about 4 seconds. Adding a note on this limitation to the pulseIn page wouldn't hurt either.
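For anyone curious what that fix does at the boundaries, here is a small stand-alone sketch (desktop-compilable; the sample timeout values are arbitrary):

#include <stdio.h>
#include <stdint.h>

#define F_CPU 16000000L
#define microsecondsToClockCycles(a) ( ((a) * (F_CPU / 1000L)) / 1000L )

int main(void) {
    /* timeout / 16 quantizes the timeout down to 16 us steps */
    uint32_t samples[] = { 15, 16, 31, 32, 500000 };
    for (int i = 0; i < 5; i++) {
        uint32_t t = samples[i];
        long maxloops = microsecondsToClockCycles(t / 16);
        printf("timeout=%6lu us -> maxloops=%ld\n", (unsigned long)t, maxloops);
    }
    /* (timeout/16) * 16000 exceeds 2^32 once timeout passes ~4,294,967 us, */
    /* hence the "about 4 seconds" ceiling */
    return 0;
}

Since each poll of the port takes roughly 16 clock cycles (1 us at 16 MHz), maxloops is in effect the timeout in microseconds; note how 15 maps to 0 and 16 through 31 all map to 16.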
Good suggestion; it gets divided by 16 afterwards anyway.
If the CPU speeds are all an integral number of MHz, one could also divide F_CPU by 1M and multiply by timeout, but I think your solution is better.
So how does this work to get the fix implemented? I am completely new to Arduino and I want to make sure I am using the "official" fixed code.
What I proposed was just a quick fix to keep you out of trouble, but as is we will lose out on granularity, and we should try to improve on this as well before proposing a change.
The following patch might be a better candidate:
unsigned long maxloops = microsecondsToClockCycles(1) * timeout / 16;
For a 16 MHz clock there will be 16 cycles per us, so we should be good up to 2^32 / 16 us without overflow. That is equivalent to about 268 seconds (4 minutes and 28 s). In comparison, the 022 core version will overflow for timeouts above about 268 ms (2^32 / 16,000 us). Granularity will be as for the current implementation across the full timeout range. No impact on other parts of the core.
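To sanity-check the patch, here is a stand-alone version one could compile on a desktop (nothing below comes from the core except the macro itself):

#include <stdio.h>
#include <stdint.h>

#define F_CPU 16000000L
#define microsecondsToClockCycles(a) ( ((a) * (F_CPU / 1000L)) / 1000L )

int main(void) {
    /* microsecondsToClockCycles(1) folds to a compile-time constant: 16 */
    uint32_t cycles_per_us = microsecondsToClockCycles(1);

    /* 16 * timeout stays below 2^32 for timeouts up to 268,435,456 us */
    uint32_t timeout = 500000;  /* the original 1/2-second case */
    uint32_t maxloops = cycles_per_us * timeout / 16;
    printf("maxloops = %lu\n", (unsigned long)maxloops);  /* 500000 */
    return 0;
}

Because the multiply by 16 happens before the divide, 1 us granularity is preserved across the whole range.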
Thanks, that is better.
However, looking at this through the eyes of a new user, I would wonder why I couldn't wait longer. I could understand a lower limit of 16 us, but why not be able to use the full 2^32 us for the timeout (over an hour)? Wouldn't that be possible if it were changed to:
unsigned long maxloops = microsecondsToClockCycles(1) * (timeout / 16);
If we divide timeout by 16, we will only be able to specify its value in steps of 16. That is, any value from 1 to 15 becomes 0 us, 16 to 31 becomes 16 us, 32 to 47 becomes 32 us, and so forth.
The current implementation has a lower limit / resolution of 1 us (16 cycles at 16 MHz) and we don't want to change that (to 16 us) as this could (and most likely will) break existing code.
Okay, valid point.
I was thinking in terms of apps where I'd be waiting longer for a pulse, but this can be handled easily enough by the calling program.
I hope, for others' sake, the documentation gets revised to reflect the change.
Thanks again.
BenF:
The following patch might be a better candidate:
unsigned long maxloops = microsecondsToClockCycles(1) * timeout / 16;
I don't see this fix in the release notes for 0023 or 1.0. Did it make it?