If interrupts are disabled, millis() can't advance, right?
That would be the case if millis() were poorly coded. Interrupts get disabled all the time. millis() is presumably driven by a timer interrupt, so even while that interrupt is disabled, the hardware flag still gets set and the timer keeps counting, not missing a beat.
However, if one of the isrs takes too long to execute (long enough for the timer's flag to be set more than once), millis() will lose count.
You can test this by running a loop inside an isr for an extended period (longer than one timer0 overflow period, roughly 1 ms) and then reading millis() to see if it missed a beat.
I was still referring to the scenario where delay() is called inside an ISR. millis() does depend on the timer0 interrupt (the relevant code is in hardware/arduino/cores/arduino/wiring.c).
You make this a lot more complicated than it really is...
You are not answering his question.
I was still referring to the scenario where delay() is called inside an ISR.
Yes, it can miss a beat if one of the isrs is too long (2 x 1024 us or longer). Say the timer0 overflow isr has just run: m is updated and the timer0 overflow flag is cleared. Now you enter one of your overly long isrs (say it takes 2.2 ms to execute). Timer0 keeps counting, and 1 ms into the execution it overflows; the flag is set by hardware, but the interrupt is masked because you are still inside the long isr. Timer0 overflows a second time at the 2 ms mark, setting a flag that is already set, so that second overflow is lost. 0.2 ms later you exit the long isr, execution goes straight back to the timer0 overflow isr, and m is updated, but only by 1 -> you miss a ms.
Conversely, if you turn global interrupts back on in the middle of the long isr, execution will jump to the timer0 overflow isr as soon as its flag is set -> you get a nested isr.
It is doable, but without considerable skill it will for sure kill most programs.
My original doubt was whether delay() inside an ISR would freeze the program because the timer0 interrupt is disabled (assuming one doesn't use the "selective interrupt disable" option just discussed, which as you confirmed can lead to nested interrupts).
It's about time I wrote some code of my own, I guess.
(if one doesn't use the "selective interrupt disable" option just discussed, which as you confirmed would lead to nested interrupts).
We may have misunderstood each other.
During normal isr execution, peripheral interrupts are always on - they are never disabled in the first place. So if an adc interrupt or a spi interrupt arrives, its flag gets set, as usual.
The difference is that the global interrupt is disabled during an isr. So interrupts other than the one currently being serviced will not be serviced, regardless of their priorities, until the current isr has finished executing.
From within the current isr, you don't need to worry about being preempted by other interrupt requests, because the global interrupt is disabled; their flags are still set, though, so they get serviced once you return.
So I don't quite understand the point of "selective disable". As soon as the global interrupt is enabled inside an isr, you run the risk of nested isrs, and that kind of programming isn't for the faint of heart.
Please give me a second chance to clarify what I was trying to say
Pin signal triggers an interrupt => we are inside the "pin ISR" and we call delay() to debounce (the wrong way to do it, as already discussed, but that's not the point now). The timer0 interrupt fires at some point, but it doesn't get serviced because of the global interrupt disable.
If the ISR just took too much time to execute, at some point it would terminate and the Timer0 interrupt would be eventually serviced, perhaps just a bit late.
If we call delay(), though, we are waiting for the time counter to advance. But that counter is advanced by the timer0 ISR, which as we saw is not being serviced. So we wait forever: the "pin ISR" never exits and everything grinds to a halt.
Now, to see whether my reasoning is correct, I should (re)read wiring.c very carefully and try some code.
TCSC47:
Is there any reason why the standard two-NAND-gate flip-flop with a changeover switch cannot be used for switch debounce? This is the circuit I have almost always used for such applications.
That type of circuit was discussed in the page I linked to earlier in the thread, which also mentioned the drawback that a double-throw switch is needed. But that aside, it works fine of course.
That is not correct. The counter (TCNT0) is advanced by hardware.
Based on wiring.c, delay() uses micros(), which tests the timer0 overflow flag directly. So from that perspective, using delay() within an isr is OK.
As I said, I needed to check wiring.c more carefully. Thanks for pointing this out.
dhenry:
micros(), interestingly, disables the global interrupt upon entry but never re-enables it upon exit.
SREG is saved before interrupts are disabled, and then restored to that saved value upon exit. The global interrupt enable flag thus returns to whatever state it had on entry - "enabled", in the normal non-isr case.
delay() relies on micros() to work: it computes the elapsed time to make sure that a sufficient number of timer ticks have passed.
micros() relies on the timer0 overflow flag to work: it tests the flag, and if it is set, it concludes another overflow (~1 ms) has passed and computes its return value accordingly. For this to work, the flag must not be cleared by the timer0 isr in the middle of the read, so micros() disables interrupts, knowing that once you return from micros(), the flag, if set, will be cleared by the timer0 isr.
The side effect is that in an isr environment, once you exit micros(), you are still inside the isr and the timer0 flag never gets cleared. So once the flag is set, every subsequent call to micros() thinks another overflow has passed.
What you observe is that delay() in an isr runs very fast: it gets the first <256 ticks right, but after that, "time" literally flies.
They could have programmed it differently to avoid this problem. For example, they could have used a static variable to record TCNT0 the last time micros() was called and compared it with the current reading of TCNT0 to decide whether the timer has overflowed. This approach does not rely on the timer0 flag being set and cleared, but it does require that micros() be called frequently.
Hi Henry
I'm still getting used to this forum and I haven't been able to quote one of your comments.
I said --- "the inputs of the atmega device have hysteresis, so the sensor switches can be connected directly to the device, eliminating the 74HC14 completely and reducing the component count to a minimum." You said --- "I don't know how the hysteresis (either on the atmega or the hc14) would have eliminated the need for debouncing. Your circuit would have worked with a non-ST gate, and the atmega, with hysteresis, would malfunction without a debouncing approach."
The Schmitt input would of course need an RC circuit to hardware debounce. However, the RC circuit by itself would not provide a reliable debounce action without the Schmitt trigger.
I must add, however, that I have come to the conclusion from all the comments here (accepting that I do not have full access to Ironbot's design) that the best way for Ironbot to go is to debounce in software. If this is a learning exercise, it will be a valuable bit of experience anyway.