How fast can I interrupt?

I'm using a light-to-frequency IC, hooked to an interrupt pin, to increment an unsigned long. I check the unsigned long every now and then to get an average frequency and thus light level. How fast can I do this?

The IC has options for frequency dividing as well as 3 sensitivity ranges. In my code I already know I want the minimum frequency before switching to the next higher sensitivity to be 100 Hz, but I'm not sure how high I should go on the upper end before switching sensitivity down. The chip goes to 1.1 MHz before saturating. Is that too much, or should I aim for more like 300 kHz?

Good question, I hope someone can answer that by either calculation or a test procedure. The Arduino's 16 MHz clock speed should be plenty to handle a 1.1 MHz interrupt frequency, I would think, but a lot depends on how much or little code is in your ISR.

By the way, if you would like to share your code when you are done, I know I would appreciate it a lot, as I purchased a light-to-frequency sensor from Sparkfun some time back but haven't thought of a good application yet.


You could build a daylight sensor, and as soon as the frequency is above, say, 500 kHz, turn on an LED to indicate that the sun is shining. There may be a market for this, as in some cars you can nowadays find lamps in the dashboard indicating if one is driving forward or, god forbid, backward. I'm still waiting for a lawsuit about that. "I couldn't possibly have hit his car, as I wasn't driving - the LED was off."

Interrupts have a lot of overhead due to save/restore of context. I looked at the assembly code for an interrupt that just incremented an unsigned long and it was 29 instructions. Optimistically estimate 2 cycles for each instruction and you're looking at 58 cycles, which at 16 MHz translates to 3.625 microseconds, not including time for the AVR to actually recognize and jump to the interrupt. That means top frequency would be about 275 kHz, assuming your processor is doing nothing else but getting interrupted.
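That estimate can be sanity-checked in a few lines (same numbers as above, nothing measured):

```cpp
// Back-of-envelope check of the ISR overhead estimate.
const double kIsrInstructions = 29;     // from the disassembly
const double kCyclesPerInstr  = 2;      // optimistic average
const double kClockHz         = 16e6;   // Arduino clock

const double kIsrCycles  = kIsrInstructions * kCyclesPerInstr; // 58 cycles
const double kIsrSeconds = kIsrCycles / kClockHz;              // 3.625 microseconds
const double kMaxIsrHz   = 1.0 / kIsrSeconds;                  // ~275,862 Hz
```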

Why not just use the input capture to wait for one edge, wait for the next one, then measure the time between the two edges? No interrupts required.

Bear in mind when reading my reply that I'm very new to the world of Arduino. I might be wrong! ;)

It's my understanding that the Atmel AVR processors typically execute one instruction per clock tick. So a clock rate of 16 MHz gives us, in the best case, 16 million instructions per second.

This is from the datasheet for the AT90USB82 processor; things in parentheses are mine...

  • The interrupt execution response for all the enabled AVR interrupts is five clock cycles minimum (the processor is fixin' to execute the interrupt)
  • The vector is normally a jump to the interrupt routine, and this jump takes three clock cycles (the processor jumps to the ISR)
  • SREG must be saved and restored (the processor doesn't do this for us and SREG is important)
  • A return from an interrupt handling routine takes three clock cycles
  • When the AVR exits from an interrupt, it will always return to the main program and execute one more instruction before any pending interrupt is served

Those are the things necessary just to get the ISR called. We have not yet added the application stuff (incrementing an unsigned long in BetterSense's case).

Adding those up gives us 5+3+2+3+1 = 14 cycles. The absolute maximum number of interrupts per second that can be handled by the AT90USB82 is 16 million cycles per second / 14 cycles per interrupt = 1,142,857 interrupts per second.

At that interrupt rate, there's one instruction per interrupt left for the application and that instruction is executed in the "normal" path not the ISR. :o
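The arithmetic above, spelled out (the per-item cycle counts are the datasheet figures quoted earlier):

```cpp
// Minimum per-interrupt cycle budget from the AT90USB82 datasheet bullets.
const long kResponse   = 5;  // minimum interrupt execution response
const long kVectorJump = 3;  // jump from the vector table to the ISR
const long kSregSave   = 2;  // push + pop of SREG, one cycle each
const long kReti       = 3;  // return from the interrupt routine
const long kMainInstr  = 1;  // mandatory main-program instruction between ISRs

const long kCyclesPerIrq = kResponse + kVectorJump + kSregSave
                         + kReti + kMainInstr;          // 14 cycles
const long kMaxIrqPerSec = 16000000L / kCyclesPerIrq;   // 1,142,857
```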

So, the answer is, "Yes, it is too much. You should aim for 300 kHz."

Sorry to be the bearer of bad news, Brian

It's not such bad news. I was figuring on 100 kHz originally until I read the datasheet and found out I had so much room at the top. After reading this information I will probably start with 200 kHz. I have to test the chip with my optics after all. With a 14 mm objective lens my concern is running out of room at the dim end, not the bright end. I can always add a light filter.

My application is a very small-angle photographic spot meter. So when I get a frequency reading I have to do some fairly advanced maths to return an EV, which I then run through more stuff to get human-readable camera settings displayed on the LCD. Most of this is already coded. I just need to do the auto-sensitivity scaling bit.

Why not just use the input capture to wait for one edge, wait for the next one, then measure the time between the two edges? No interrupts required.

What is this “input capture”?

I decided keeping a running average frequency would be more robust than reading any one oscillation interval. As it is now, my code updates the average chip frequency every 1/10 of a second, which I judge to be the longest interval that is perceptually instant. When the user pulls the trigger, the latest calculated average frequency gets dumped to the rest of the code and displayed on the LCD.

What is this “input capture”?

Have a look at Section 13.6 of the ATmega328 datasheet.

You can still average over oscillation intervals, it just doesn’t require interrupts to maintain accuracy.

That sounds like an interesting way to do in hardware what I'm doing in software. I don't know how to implement it on the Arduino, though; I haven't seen input capture mentioned in the Arduino docs.
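For the curious, here is a rough, untested sketch of what Timer1 input capture looks like at the register level on the ATmega328 (register and bit names are from the datasheet section cited above; `waitForCapture` and `measureOnePeriodHz` are made-up helper names, not Arduino API):

```cpp
#include <stdint.h>

// Convert the tick count of one period into Hz, assuming the 16 MHz
// system clock drives Timer1 with a prescaler of 1.
static inline uint32_t ticksToHz(uint16_t ticks) {
    return ticks ? (uint32_t)(16000000UL / ticks) : 0;
}

#ifdef __AVR__
#include <avr/io.h>

// Busy-wait for the next capture event on ICP1 (Arduino pin 8) and
// return the timer value the hardware latched into ICR1.
static uint16_t waitForCapture(void) {
    TIFR1 = _BV(ICF1);               // clear any stale capture flag
    while (!(TIFR1 & _BV(ICF1))) {}  // wait for the next selected edge
    return ICR1;
}

// Measure one period of the input signal - no interrupts involved.
uint32_t measureOnePeriodHz(void) {
    TCCR1A = 0;                        // Timer1 in normal mode
    TCCR1B = _BV(ICES1) | _BV(CS10);   // capture rising edges, prescaler 1
    uint16_t first  = waitForCapture();
    uint16_t second = waitForCapture();
    return ticksToHz((uint16_t)(second - first));  // wrap-safe difference
}
#endif
```

At a 200 kHz input that gives 80 ticks per period, so the 16-bit timer has plenty of range before it overflows.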

This discussion has made me worried that there is a shortcoming in my code elsewhere, related to the amount of time that it takes to run my incrementing interrupt function. I'm measuring a time interval by using delay(100), but while delay() is delay-ing, the interrupt will be interrupting very many times. If delay() is a simple, crude function, my delay time might be much different than 100 milliseconds. I'm tempted to rewrite using millis() or something, but then delay() might be using millis() internally to calculate its delay for all I know, and there's nothing to worry about. Anyone know?

This is the heart of my frequency-calculator

    p0=pulses;            //while button is high, constantly refresh frequency
    delay(100);
    p1=pulses;            //sample the counter again after the 100 ms window
    freq=(10*(p1-p0));    //at this point this is the raw, actual frequency we are reading. We morph freq later.

and here's my interrupt routine

    void pulsecounter(){
        pulses++;
    }

If your requirement is to take periodic samples of the frequency generated by the sensor, then pulseIn may be the easiest way to do this. The only downside of pulseIn is that it does not let you do anything else while it is measuring the period (as you no doubt are aware, the frequency is equal to 1 divided by the period), but that may not matter for your application.

I'm not using pulseIn in this case, because I don't think it's accurate enough to reliably measure the pulses in a 200kHz square wave, and the whole "can't do anything while measuring" bit is a deal-breaker. I'm happy with my implementation now, but I'm worried about interrupts affecting delay().

Does anyone know where I can get more detailed info on delay()? The docs say

Certain things do go on while the delay() function is controlling the Atmega chip however, because the delay function does not disable interrupts. Serial communication that appears at the RX pin is recorded, PWM (analogWrite) values and pin states are maintained, and interrupts will work as they should.

I'm just wondering if the interrupt routine running at 100+ kHz is going to affect the length of delay().

If delay() is a simple, crude function, my delay time might be much different than 100 milliseconds

Delay is not "crude"; it watches a counter updated by the timer0 overflow interrupt (which fires approximately every millisecond, but not exactly). So it should be accurate unless you manage to lock out other interrupts for more than a millisecond at a time.
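If you'd rather see the timing spelled out with millis() instead of delay(), the same measurement can be sketched like this (untested; `hzFromCounts` and `sampleFrequencyHz` are made-up names, and `pulses` is assumed to be the ISR counter from your code above):

```cpp
// Convert a count delta taken over 100 ms into Hz - the same scaling
// the frequency-calculator snippet earlier in the thread uses.
static inline unsigned long hzFromCounts(unsigned long delta) {
    return 10UL * delta;  // counts per 0.1 s, times 10, gives Hz
}

#ifdef ARDUINO
#include <Arduino.h>

extern volatile unsigned long pulses;  // incremented by the ISR

// millis()-based version of the p0 / delay(100) / p1 measurement: the
// ISR keeps firing during the wait, just as it does inside delay().
unsigned long sampleFrequencyHz(void) {
    unsigned long p0 = pulses;
    unsigned long start = millis();
    while (millis() - start < 100UL) {}  // 100 ms gate
    return hzFromCounts(pulses - p0);
}
#endif
```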

Ok thank you. I didn't mean to accuse anyone of writing crude code (that's my job).