When you enter an ISR, the global interrupt enable bit in SREG is cleared automatically, and when you exit the ISR it is restored. This means that no other ISRs will be executed while you are in the current ISR. The way around this is to re-enable interrupts in your ISR using the sei() command, or to declare your ISR with avr-libc's ISR_NOBLOCK attribute, as in ISR(TIMER1_OVF_vect, ISR_NOBLOCK), which makes the compiler re-enable interrupts as early as possible in the ISR prologue.
Note that if you re-enable interrupts in your ISR, you run the risk of your ISR interrupting itself. If this happens repeatedly, you effectively have infinite recursion, which will quickly overflow your stack. The solution is to explicitly disable the current ISR's own trigger before re-enabling the global interrupt flag. For example:
TIMSK1 &= ~(1 << TOIE1); // disable the timer1 overflow interrupt
sei(); // re-enable global interrupts so that other ISRs can execute
// ... do some stuff ...
TCNT1 = 0; // reset the count so the next overflow is a full period away
TIFR1 = 1 << TOV1; // clear any pending timer1 overflow flag (write 1 to clear)
TIMSK1 |= 1 << TOIE1; // re-enable the timer1 overflow interrupt
// the previous lines make sure that we can now get out of this ISR
// without this ISR interrupting itself
If you just want to time things in an interrupt, you can rely on the timer count registers TCNTx. Timers will continue to run while you are in the ISR; it’s only their associated interrupts that will be disabled by default. So if you want to wait for a specific length of time and you know that timer0 is running at a specific clock speed, you could do something like:
unsigned char time = TCNT0;
while ((unsigned char)(TCNT0 - time) < 100)
    ; // busy-wait for 100 timer0 ticks (the cast keeps the math modulo 256)
other code here
Another option is to use a loop delay. The file <util/delay.h> gives you the functions _delay_ms() and _delay_us(), which use the clock speed defined by F_CPU to produce accurate delays regardless of your clock speed. Unfortunately, these delays are restricted to short durations when the clock speed is high: if F_CPU is 20 MHz, for example, the maximum you can delay with a single _delay_ms() call is something like 13 ms. The solution is to make your own function on top of these:
void delay_ms(unsigned int time_ms)
{
    while (time_ms--)
        _delay_ms(1); // a 1 ms delay is always within _delay_ms()'s range
}
This delay will work inside an ISR because it doesn’t rely on interrupts for its timing.
Lastly, in general your guiding ISR approach should be: “get in and get out.” It’s usually not a very good practice to spend a lot of time in an ISR, because this interrupt will happen in the middle of your code and could introduce a long delay into a routine that was not expecting it. For example, maybe you’re trying to measure the length of a short pulse when this ISR occurs. If you spend a few milliseconds in your ISR and the pulse is only a few microseconds long, you might miss the pulse entirely, or you might measure its length as a few milliseconds rather than microseconds. If you need your ISR to accomplish complicated/long things, you should, if possible, just have the ISR set up events that can then be carried out in your main loop.
volatile unsigned char event = 0;

// in the ISR:
event = 1;

// in the main loop:
if (event == 1)
{
    // do the long/complicated thing
    event = 0;
}
In my experience, interrupts are one of the hardest things to use well in embedded programming. Because of their often non-deterministic nature, and because they can occur in the middle of an operation you might think is uninterruptible, they can lead to bugs that are incredibly difficult to track down.