Interrupts and loop


I have the following code:

volatile int flag_sf = 0;
int old_flag = 0;

int pin = 7;
volatile int state = LOW;

void setup() {
  Serial.begin(9600);  // needed before Serial.println()
  Serial.println("BEGIN NOW");
  attachInterrupt(digitalPinToInterrupt(pin), interrupt, RISING);
}

void loop() {
  flag_sf = 0;
  // ...
}

void interrupt() {
  flag_sf = 1;
}

void start() {
  // ...
}
The problem is that upon reception of the interrupt, it calls start() as it should, but after that function finishes, nothing else ever happens. Why does execution never return to loop()? How can I achieve that?

This code is actually working; however, I had forgotten the following line:

digitalWrite(7, LOW);

I'm very confused by this and by the example on the attachInterrupt() reference page.

If you are attaching an interrupt to a pin, you would want that pin to be an input, so you can interrupt on the chosen transition edge. Yet your code, and the example, set the pin as an output? It makes absolutely no sense to me.

The example on the reference page is even worse: it uses pin 13 as an LED drive output, and uses THE SAME PIN as the interrupt input? And then it goes so far as to set the interrupt to trigger on changes. So the only thing I can think of is that they are setting up a feedback loop, resulting in a sort of oscillator: when the LED state changes, it triggers an interrupt, which changes the state of the LED and triggers another interrupt, which changes the state of the LED, and so on... I think it's a very poor and confusing example.

Nick Gammon has a blog post on interrupts, and I think that one makes more sense. In his example, he has one pin for an LED and another pin for a button. He sets up an interrupt on the button, so that any change in the button state produces a corresponding change on the LED. The example is fairly clear, but it uses an unfortunate shortcut in setting up the interrupt pin: he doesn't make a pinMode() call on the pin, but he does call digitalWrite(HIGH) on it. This relies on the fact that pins are inputs by default, and that setting the output register high while a pin is configured as an input actually enables the pull-up resistor on that input. I think it would be better if he explicitly called pinMode() on the interrupt pin to set it to INPUT_PULLUP.
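To make that concrete, here is roughly what that button-drives-LED wiring looks like with the pull-up made explicit. The pin numbers and the ISR name below are my own illustration, not Gammon's exact code:

```cpp
// Sketch of the button-drives-LED pattern with an explicit pull-up.
// Pin numbers and the ISR name are illustrative.
const int ledPin = 13;
const int buttonPin = 2;        // must be an interrupt-capable pin

void buttonChanged() {          // ISR: keep it short
  digitalWrite(ledPin, digitalRead(buttonPin));
}

void setup() {
  pinMode(ledPin, OUTPUT);
  pinMode(buttonPin, INPUT_PULLUP);  // explicit, instead of digitalWrite(buttonPin, HIGH)
  attachInterrupt(digitalPinToInterrupt(buttonPin), buttonChanged, CHANGE);
}

void loop() {
  // nothing needed here; the ISR does the work
}
```

With INPUT_PULLUP the intent is visible at a glance, and the sketch doesn't depend on the reader knowing the output-register/pull-up trick.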

So, getting back to your code: if you are setting the pin as an output and driving it low, how are you feeding it a signal that triggers an interrupt? You must be injecting enough current to overwhelm the output driver that's trying to hold the pin low, and that's enough to register a rising edge? If so, I don't think that's a very good design, and it could damage the pin's output driver circuitry over time.

I guess I'm shocked (and concerned) that setting the interrupt pin to an output, and forcing it low allows the code to run. I think you just got lucky, and didn't really fix the problem. (And you possibly created a worse problem?)