debounce problem

Nantonos:

TCSC47:
Is there any reason why the standard two-NAND-gate flip-flop with a changeover switch cannot be used for switch debounce? This is the circuit I have almost always used for such applications.

That type of circuit was discussed in the page I linked to earlier in the thread, which also mentioned the drawback that a double-throw switch is needed. But that aside, it works fine of course.

My favorite from days past was to wire the grounded common contact of an SPDT switch to the direct set and reset pins of a common 74LS74 flip-flop for perfect hardware debouncing. I would sometimes even add a simple RC to one of those two pins to ensure the flip-flop would power up in the state I wanted.

Lefty

tuxduino:
I thought about that. But then you'd have nested interrupts

No you don't - and you can't really tell what the better option is (hardware or software) unless you explore all options, can you?

Debouncing can be regarded as a simple case of low pass filtering and this is (should be) second nature for a software engineer to implement (10 lines of code). With or without interrupts, this can be applied effectively and reliably in either case. Put in the effort to learn how and then decide what the best option is for your project.

No you don't

The scenario is this: the signal one is trying to debounce fires the interrupt; let's call this the "button interrupt". The button interrupt gets disabled, but other interrupts do not, so one can delay() inside the ISR to do software debouncing. So while the "button ISR" is running, timer0 fires and its ISR gets executed.
That's a nested interrupt scenario, isn't it?
(btw, I'm not the OP).

If interrupts are disabled, doesn't that mean millis() can't advance?

That would be the case if millis() were poorly coded. Interrupts get disabled all the time. millis() is presumably driven by a timer interrupt, so even while interrupts are disabled the flag still gets set and the timer keeps going, not missing a beat.

However, if one of the ISRs takes too long to execute (long enough for the timer's flag to be set multiple times), millis() will lose count.

You can test this by running a loop inside an ISR for an extended period (longer than one timer0 overflow period) and then reading millis() to see if it missed a beat.

tuxduino:
That's a nested interrupt scenario, isn't it?

You make this a lot more complicated than it really is. Your comments are comparable to someone hinting about using a voltage divider and having to explain the concept of a resistor, the fact that we need two of them, the use of power rails and how it all comes together. And surely, if none of these concepts are known, it is complicated. When working with electronics we need basic skills and the same applies to coding.

When we push a mechanical button, this typically results in a burst of pin state changes until the contacts make a firm connection. The basic requirement of debouncing is to record this as a single event; without some form of debouncing, we would record multiple key-push events. We can avoid this with a first-order external low-pass filter (an RC circuit), or, since we're using a microcontroller, we can handle it in software without additional components.

The software approach simply requires that we record the time of the first pin state change (someone pushed the button), and ignore additional pin state changes until a time period has elapsed. That’s all there is. How we detect the pin state change (interrupt, polled or otherwise) is irrelevant in this context.

Then on to basic coding skills - we do not call the delay() function in ISRs (in fact we have no use for a delay function at all in well-written code). In ISRs, we simply record the event (write to a global variable) and leave the actual processing of the event to the loop() function. Rather than using delay(), we use the recorded time of the event and calculate the difference between the current time and the event time every time through our loop() function. In the first few milliseconds after the event, we simply ignore additional pin state changes (we already acted on the first state change). Once the debounce period has expired (say 5 ms or so) we're back to where we started and ready to act on new button push events.

We could also contain the debounce logic (no delay) within the ISR itself based on time between successive change events and so only report debounced events to the outside world (the loop function). This is just a matter of style.

Another approach again is to disable pin state interrupts within the ISR itself and then re-enable in the loop function once the debounce period has expired.

BenF, thanks for your thorough explanation.

It all started with this comment:

Grumpy_Mike:

Have you considered adding a software debounce ?

It is a lot more tricky on an interrupt pin, because the delay is in the ISR which is never a good thing.

I'd never call a delay() inside an ISR, but that comment got me thinking about what would happen if I did.

dhenry:

If interrupts are disabled, doesn't that mean millis() can't advance?

That would be the case if millis() were poorly coded. Interrupts get disabled all the time. millis() is presumably driven by a timer interrupt, so even while interrupts are disabled the flag still gets set and the timer keeps going, not missing a beat.

However, if one of the ISRs takes too long to execute (long enough for the timer's flag to be set multiple times), millis() will lose count.

You can test this by running a loop inside an ISR for an extended period (longer than one timer0 overflow period) and then reading millis() to see if it missed a beat.

I was still referring to the scenario where delay() is called inside an ISR. millis() does depend on the timer0 interrupt (the relevant code is in hardware/arduino/cores/arduino/wiring.c).

You make this a lot more complicated than it really is...

You are not answering his question.

I was still referring to the scenario where delay() is called inside an ISR.

Yes, it can miss a beat if one of the ISRs is too long (2 × 1024 µs or longer). Say the timer0 overflow ISR has just run: m is updated and the timer0 overflow flag is cleared. You then enter one of your overly long ISRs (one that takes 2.2 ms to execute). Timer0 continues to roll, and 1 ms into the execution it overflows; the flag is set by hardware, but that interrupt is masked off because you are still inside the long ISR. Timer0 overflows a second time and the flag is set again at the 2 ms mark. 0.2 ms later you exit the long ISR, execution goes right back to the timer0 overflow ISR, and m is updated, but only by 1 -> you miss a ms.

The opposite can also happen: if you turn the global interrupt on in the middle of the long ISR, execution will jump back to the timer0 overflow ISR -> you get a nested ISR.

It is doable, but without considerable skill it will for sure kill most programs.

Thanks dhenry.

My original doubt was whether delay() inside an ISR would freeze the program due to the timer0 interrupt being disabled (if one doesn't use the "selective interrupt disable" option just discussed, which as you confirmed would lead to nested interrupts).

It's about time I write some code on my own I guess :stuck_out_tongue:

(if one doesn't use the "selective interrupt disable" option just discussed, which as you confirmed would lead to nested interrupts).

We may have mis-understood each other.

During normal ISR execution, peripheral interrupts are always on; they are never disabled in the first place. So if an ADC interrupt or an SPI interrupt arrives, the flags get set, as usual.

The difference here is that the global interrupt is disabled during isr. So those interrupts, other than the one currently being serviced, will not be serviced, regardless of their priorities, until the current isr has finished execution.

From within the current ISR you don't need to worry about other interrupt requests: they are not individually disabled, but because the global interrupt is off they cannot preempt you.

So I don't quite understand the point of "selective disable". As soon as the global interrupt is enabled inside an ISR, you run the risk of nested ISRs. That kind of programming isn't for the faint of heart.

I used the wrong word, sorry.

Please give me a second chance to clarify what I was trying to say :slight_smile:

Pin signal triggers an interrupt => we are inside the "pin ISR" and we call delay() to debounce (the wrong way to do it, as already discussed, but that is not the point now). The timer0 interrupt fires at some point, but it doesn't get serviced because the global interrupt is disabled.
If the ISR merely took too much time to execute, it would terminate at some point and the timer0 interrupt would eventually be serviced, perhaps just a bit late.
If we call delay(), though, we are waiting for the time counter to advance. But that counter is advanced by the timer0 ISR, which as we saw is not being serviced. So we wait forever: the "pin ISR" never exits and everything grinds to a halt.

Now to see whether my reasoning is correct I should (re)read wiring.c very carefully and try some code :slight_smile:

But that counter is advanced by timer0 ISR

That is not correct. The counter (TMR0) is advanced by hardware.

Based on wiring.c, delay() utilizes micros(), which tests the TMR0 interrupt flag. So from that perspective, using delay() within an ISR is OK.

micros() interestingly disables the global interrupt upon entry but never re-enables it upon exit.

Nantonos:

TCSC47:
Is there any reason why the standard two-NAND-gate flip-flop with a changeover switch cannot be used for switch debounce? This is the circuit I have almost always used for such applications.

That type of circuit was discussed in the page I linked to earlier in the thread, which also mentioned the drawback that a double-throw switch is needed. But that aside, it works fine of course.

Apologies. I missed that.

dhenry:

But that counter is advanced by timer0 ISR

That is not correct. The counter (TMR0) is advanced by hardware.

Based on wiring.c, delay() utilizes micros(), which tests the TMR0 interrupt flag. So from that perspective, using delay() within an ISR is OK.

As I said, I needed to check wiring.c more carefully :stuck_out_tongue: Thanks for pointing this out.

dhenry:
micros() interestingly disables the global interrupt upon entry but never re-enables it upon exit.

SREG is saved before disabling interrupts. It is then restored to its saved value upon exit. The global interrupt enable flag thus returns to its "enabled" state.

This is actually quite interesting.

delay() relies on micros() to work: it calculates the elapsed time to make sure that a sufficient number of timer ticks have passed.

micros() relies on the timer0 overflow flag to work: it tests the flag, and if it is set it concludes a millisecond has passed and calculates micros() accordingly. For this approach to work, the timer0 flag must not be cleared by the timer0 ISR while micros() examines it, so micros() has to disable interrupts, knowing that once you return from micros(), the flag, if set, will be cleared by the timer0 ISR.

The side effect of this is that in an ISR context, once you exit micros(), you are still in the ISR and the timer0 flag never gets cleared. So once the timer0 flag is set, each time micros() is called it concludes another millisecond has passed.

What you observe is that delay() in an ISR runs very fast: it gets the first <256 ticks right, but after that, "time" literally flies.

They could have programmed it differently to avoid this problem. For example, they could have used a static variable to record TCNT0 the last time micros() was called and compare it against the current reading of TCNT0 to decide whether the timer has overflowed. This approach does not rely on the timer0 flag being set and cleared, but it requires that micros() be called frequently.

Hi Henry
I'm still getting used to this forum and I haven't been able to quote one of your comments.

I said --- "the inputs for the atmega device have hysteresis, then the sensor switches can be connected directly to the device, eliminating the 74HC14 completely, reducing the component count to a minimum."
You said -- "I don't know how the hysteresis (either on the atmega or hc14) would have eliminated the need for debouncing. Your circuit would have worked with a non-ST gate and the atmega, with hysteresis, would malfunction without a debouncing approach."

The Schmitt input would of course need an RC circuit for hardware debouncing. However, the RC circuit by itself would not provide a reliable debounce action without the Schmitt trigger.

I must add, however, that I have come to the conclusion from all the comments here (while accepting that I do not have full access to Ironbot's design) that the best way for Ironbot to go is to software debounce. If this is a learning exercise, it will be a valuable bit of experience anyway.

You have started an interesting string Ironbot.

You have started an interesting string Ironbot.

Definitely :slight_smile:

Please let me point out this must-read page once more:

tuxduino:

You have started an interesting string Ironbot.

Definitely :slight_smile:

"ironbot" got started when I bought an Arduino in 2009. Arduino started it all!

Many thanks to the whole community and forum, with great respect!