Interrupt internals ... atmega328p

Say I connect wires on pins 2 and 3 between two Arduino Unos, and on one of them execute the following code:

   // assume both were low
   digitalWrite(2,HIGH);
   asm("nop\n nop\n nop\n ... ");  // some very short delay of  N cycles 
   digitalWrite(3,HIGH);

Now assume the other Arduino Uno has enabled its two hardware interrupts on its pins 2 and 3.

Can one expect the ISRs for pins 2 and 3 to detect the sequence in which the pins go high, even if we swap the two digitalWrite() calls in the example? By "detect" I of course mean call order.

I realize that if the second Uno is busy inside some ISR, or otherwise executing code between cli() and sei(), then both INT0 and INT1 may have their trigger bits set, and priority then means the INT0 ISR gets executed first regardless. So let's assume that is not the case, and that Uno two is running regular code.

On top of this, there is the issue that setting a line high means enabling its pull-up resistor, and that to reach close to 1 MHz switching frequency one may need smaller pull-up resistors. But if one can assume the individual pull-ups on each pin to be almost identical, the time delay from issuing a HIGH until the signal is received at the other end may be considered unimportant.

So can one assume that pullups on different pins are mostly identical?

I know this starts to sound very much like i2c, and that may not be a coincidence.

What would be a safe delay between the two write operations, theoretically?

Can one expect the ISRs for pins 2 and 3 to detect the sequence in which the pins go high, even if we swap the two digitalWrite() calls in the example? By "detect" I of course mean call order.

The interrupts will be fired in the order in which they are received. Why would you expect anything else?

UKHeliBob:
The interrupts will be fired in the order in which they are received.

And a short program would demonstrate that.

…R

Arduino’s digitalWrite() function is slow in some applications because of its overhead; for time-critical applications it is better to write directly to the port, which guarantees more precise timing.
The sequence order of the ISRs should not be a problem to detect, as long as there is enough time for processing, but yes:
each external interrupt on the ATmega has its own flag to register the occurrence. When interrupts are disabled and, let's say, several other interrupts occur, the ISRs will then be executed in their priority order, not chronologically. Also, if one interrupt occurred several times while interrupts were disabled, its ISR will be executed just once after re-enabling. For advanced interrupt functionality, like nested interrupts etc., you have to look at a different MCU type.
EDIT: From the datasheet:

When an interrupt occurs, the Global Interrupt Enable I-bit is cleared and all interrupts are disabled. The user software can write logic one to the I-bit to enable nested interrupts. All enabled interrupts can then interrupt the current interrupt routine. The I-bit is automatically set when a Return from Interrupt instruction – RETI – is executed.

I have to correct myself here. Nested interrupts are allowed on the ATmega, but they are fully controlled by the user. :-[

So can one assume that pullups on different pins are mostly identical?

Yes. There is some tolerance in the datasheet, but on one chip they are almost the same value; it depends on the production technology. According to the datasheet, the ATmega328P has 20-50 kOhm pull-ups, and 30-60 kOhm on the reset pin. Consider using external pull-ups, because several factors can affect the signal and the internal pull-ups are a little large.

What would be a safe delay between the two write operations, theoretically?

It is not clear to me. If you mean the delay between the two ISRs from the example above, then it is the time for ISR execution, and that depends on its code (four nops is definitely not enough). Keep in mind that the ISR has to be as short as possible: just set a flag or increment some variable, and execute the rest of the code, if any, in the main program.

UKHeliBob:
The interrupts will be fired in the order in which they are received. Why would you expect anything else?

The question was what the smallest delay needed is. This comes down to how the interrupt bits get set on the receiving end. I take it that's hardware, and that the operation is parallelized, so one can assume that the time between a line going logic-high and the corresponding interrupt bit being set is independent of which interrupt line it is?

Robin2:
And a short program would demonstrate that.

...R

Yeah, of course you're right, but a program may not easily determine whether that is always the case, assuming, as I did above, that the receiving Uno is engaged in running normal code with interrupts enabled.

:slight_smile:

Budvar10:
It is not clear to me. If you mean the delay between the two ISRs from the example above, then it is the time for ISR execution, and that depends on its code (four nops is definitely not enough). Keep in mind that the ISR has to be as short as possible: just set a flag or increment some variable, and execute the rest of the code, if any, in the main program.

I'm talking about the number of "nops", that is, assuming the receiving Uno is not already running an ISR or otherwise has interrupts disabled, how short the time between setting the two lines HIGH can be while still being safely detectable at the receiver.

As seen from the receiver: one line goes high, and its corresponding interrupt flag goes high "automagically", that is to say, hardware fixes this.

Let's say this happens somewhere during a clock cycle that we call M. The processor finishes its work at clock M, and if it's a two-cycle instruction (there are a few, aren't there?), then cycle M+1 is also spent processing that instruction.

I assume all interrupt flags are logically "scanned" either at every clock cycle, or after each instruction?

Thus, the microcontroller will then decide to start the ISR for the pin that went high, say.

However, if the pull-ups at the sender differ a bit, then the second line written HIGH by the sender may appear HIGH first at the receiver. That's what will determine the number of nops, I think.

Does this make sense to anybody not me? :slight_smile:

Budvar10:
Arduino’s digitalWrite() function is slow in some applications because of its overhead

Yes, but as long as the overhead is constant regardless of which pin, it doesn't matter. But that's probably not certain. So ports are best, yes.

Chapter 7.7.1 in the datasheet could be interesting reading for you.

A pin-change interrupt takes 4 clock cycles to set the flag. The hardware takes care of setting the interrupt flag. The jump to the ISR happens immediately after the current instruction finishes, if interrupts are enabled of course. BTW: you might be interested in Atmel Studio. You can step through your code instruction by instruction, count clock cycles, and simulate your problem.
The minimum ISR response time is 4 clock cycles, and the return from the ISR takes another 4. Add all the instruction cycles inside the ISR to obtain the total. It is far more than 4.

how short the time between setting the two lines HIGH can be while still being safely detectable at the receiver.

Many parameters. If the interrupt with higher priority comes first, I think a delay of just 1 NOP is needed for them to be safely executed in order, but in the opposite situation it must be at least the ISR length ((4+1) NOPs + 'longest instruction' NOPs??). And that is without assuming any cli() during code execution. How to predict this?

However, if the pull-ups at the sender differ a bit, then the second line written HIGH by the sender may appear HIGH first at the receiver. That's what will determine the number of nops, I think.

You are speculating about the line delay, right? OK, each line is like an RC filter. It would be good to have both lines the same, and as short as possible, to avoid such effects, wouldn't it? Here I have to say, I'm really curious about the specific problem we are solving.

Budvar10:
Chapter 7.7.1 in the datasheet could be interesting reading for you.

A pin-change interrupt takes 4 clock cycles to set the flag. The hardware takes care of setting the interrupt flag. The jump to the ISR happens immediately after the current instruction finishes, if interrupts are enabled of course. BTW: you might be interested in Atmel Studio. You can step through your code instruction by instruction, count clock cycles, and simulate your problem.
The minimum ISR response time is 4 clock cycles, and the return from the ISR takes another 4. Add all the instruction cycles inside the ISR to obtain the total. It is far more than 4.
Many parameters. If the interrupt with higher priority comes first, I think a delay of just 1 NOP is needed for them to be safely executed in order, but in the opposite situation it must be at least the ISR length ((4+1) NOPs + 'longest instruction' NOPs??). And that is without assuming any cli() during code execution. How to predict this?
You are speculating about the line delay, right? OK, each line is like an RC filter. It would be good to have both lines the same, and as short as possible, to avoid such effects, wouldn't it? Here I have to say, I'm really curious about the specific problem we are solving.

I also read that it takes 4 cycles to do the context switch from regular code to the ISR, and 4 cycles back. As long as it's constant across all interrupts, it doesn't matter.

The issue at hand is that I'm working on creating a debug interface for a status Monitor device, which is an ATmega328P, a 20x4 LCD screen and some LEDs, plus some buttons for managing the presentation, such as defining which "monitor channels" are connected to the LEDs etc.

I have defined a software protocol over non-hardware-supported lines. Data are signalled like this:

0-bit
Pin A:  ---------------
Pin B:

1-bit
Pin A:  --------------
Pin B:      -------

Sync (end-of-one-value-ready-for-next)
Pin A:  ------------
Pin B:       ------------

Data values are up to 16 bits, using the upper byte for control purposes. Leading MSB zeroes are skipped, so the value 1 is just one bit over the line.

The idea is that the overhead at the sender should be as small as possible, and that the code should remain a part of the application forever, so it can always be monitored in the future.

I am creating a library that runs on any two pins in any application, plus the status Monitor device, with two wires + GND to be connected to the application undergoing status monitoring.

Also, the concept of a separate protocol over separate lines may seem strange, but it lets me make it timer-driven in the future, without corrupting ongoing communication on i2c and SPI, which were my initial candidates.

The two wires are interchangeable, but that's not a crucial feature, so it may get dumped in exchange for speed.

I wrote a first interrupt-driven version of the Monitor (receiver of data) yesterday, and after some debugging (with Serial.println() ... ugh) it now receives data.

My current code for writing 1-bits looks like this:

digitalWrite(pinA, HIGH);
delayMicroseconds(XXX);   // T1
digitalWrite(pinB, HIGH);
delayMicroseconds(XXX);   // T2
digitalWrite(pinB, LOW);
delayMicroseconds(XXX);   // T3
digitalWrite(pinA, LOW);
delayMicroseconds(XXX);   // T4

Currently the XXX value is set to 25, and it works okay, but I think I can use different timing at different stages, in particular shaving T1 down to a few "nops". This is because I will have complete control of the receiving end, which means I can know that it is doing nothing except waiting for data, relying on the sender library to tell it when it can take time off to update the screen etc.

This is work in progress, and I'm having a blast! :slight_smile:

--

And before anybody starts telling me about all the standard tools that I should look into, let me add that following rules and regulations is not why I program microcontrollers.

Of course I use libraries, but at heart I want to develop my software tools (and now also hardware) bottom-up. As a professional programmer I get more than enough of "best practices" etc, at work every day!

:slight_smile:

Programming has been my hobby and work for 30+ years, and I'm not done yet!!

Can one expect the ISRs for pins 2 and 3 to detect the sequence in which the pins go high, even if we swap the two digitalWrite() calls in the example? By "detect" I of course mean call order.

Assuming no other ISRs are running (an assumption I would not normally make), then the moment you trigger an interrupt it will be processed in the next clock cycle, so no NOPs should be necessary.

On top of this, there is the issue that setting a line high means enabling its pull-up resistor, ...

Not at all, that is an entirely different thing, unless both are set to INPUT mode, which you did not say.

Say I connect wires on pins 2 and 3 between two Arduino Unos

Which I wouldn't do unless they were both inputs.


Let's assume they are both inputs; otherwise you are damaging your output ports. The input pull-ups are around 50k, so there would be a finite time for them to become high. However, the one you set high first would probably trigger the interrupt first, depending on the external circuitry.

So can one assume that pullups on different pins are mostly identical?

Maybe. The datasheet doesn't say so.

What are you really trying to do here?

http://www.gammon.com.au/interrupts

Currently the XXX value is set to 25, and it works okay, but I think I can use different timing at different stages, in particular shaving T1 down to a few "nops". This is because I will have complete control of the receiving end, which means I can know that it is doing nothing except waiting for data, relying on the sender library to tell it when it can take time off to update the screen etc.

The transmitting side is clear to me now. As written above, get rid of digitalWrite() and write your own instead. Look inside the function (wiring_digital.c). You don't need to detect the specific pin at each function call, just set the bit. http://tronixstuff.com/2011/10/22/tutorial-arduino-port-manipulation/ Here the library for the Dallas 1-Wire bus (PaulStoffregen/OneWire) would be good inspiration for how to do direct writes, and reads, much faster.

Receiving side. Do you have real experience with pulse overtaking, or is it just a program weakness? I think the second, or it is a wiring setup problem. Looking at your protocol definition, I have several solutions in mind.
You can enable interrupts inside the ISR to detect the next interrupt, for the fastest results (maybe? there is room for experiments here). ISR A should be defined for both rising and falling edges... Provide your program for discussion.

I do not understand the problem.

Even if the interrupt flags for both pins get set at the same time, both will be honored, the lower interrupt first.

On top of this, there is the issue that setting a line high means enabling its pull-up resistor, and that to reach close to 1 MHz switching frequency one may need smaller pull-up resistors. But if one can assume the individual pull-ups on each pin to be almost identical, the time delay from issuing a HIGH until the signal is received at the other end may be considered unimportant.

The value of the pull-up resistor won't matter ... the time it takes to get in and out of an ISR will determine the maximum switching frequency. I wouldn't expect much more than 100 kHz ISR frequency for a tight ISR routine. There will be some jitter in the timing, perhaps ±2 cycles depending on which instruction the CPU is executing when interrupted. Other interrupts, such as the millis timer, cause more severe jitter unless disabled.

However, stronger pull-ups will improve the input's noise immunity.

Both lines are outputs on Uno one and inputs on Uno two. Can this damage the ports??

Budvar10:
Receiving side. Do you have real experience with pulse overtaking, or is it just a program weakness? I think the second, or it is a wiring setup problem. Looking at your protocol definition, I have several solutions in mind.
You can enable interrupts inside the ISR to detect the next interrupt, for the fastest results (maybe? there is room for experiments here). ISR A should be defined for both rising and falling edges... Provide your program for discussion.

Letting ISRs modify the interrupt conditions is a good idea. Thanks! :slight_smile:

Whandall:
I do not understand the problem.

Even if the interrupt flags for both pins get set at the same time, both will be honored, the lower interrupt first.

Yes, of course. The issue was how long the sender has to delay between first setting line A high and then setting line B high, for it to safely trigger the ISRs at the receiver in that order.

Assuming that the receiver is not busy executing code with interrupts disabled, for example in some ISR. I was trying to get more information on how interrupts are processed at the receiving end.

The actual delays are not as important as that they are the same for pins 2 and 3, which I will be using, as they support hardware interrupts on my uC.

Rupert909:
The issue was how long the sender has to delay between first setting line A high and then setting line B high, for it to safely trigger the ISRs at the receiver in that order.

The answer is 0 time units of your choice.

dlloyd:
Not at all, that is an entirely different thing, unless both are set to INPUT mode, which you did not say.

Okay, I may have misunderstood there. You're saying that writing HIGH to an output pin is NOT the same as enabling its pull-up? I always thought it was.

However, doing a little math here with Ohm's law, I see that if that were the case, then the port could deliver at most 0.25 mA, and not 20 mA, as is the case.

Good one, thanks! :slight_smile:

This must mean that the pin, when written HIGH in output mode, will drive the line HIGH much faster than if the 20-30k pull-up had to do the job.

As to what I'm trying to do, I outlined that in some detail in the long post to this thread, some hours ago:

I'm building a Monitor device that will display state information from inside a running program. It may in the future even be driven by a timer, and it therefore uses non-hardware-supported lines, so as not to introduce bugs into ongoing i2c and SPI data exchanges.

In one long sentence. :slight_smile:

Whandall:
The answer is 0 time units of your choice.

Yes, that sounds right, but it assumes a couple of things:

  • any two output lines on Uno one, when written HIGH, actually go HIGH with the same time delay
  • interrupts on lines 2 and 3 on the other Uno trigger the ISRs for pins 2 and 3 in the correct order

:slight_smile:

A further assumption is that the receiving Uno does not currently have interrupts disabled, but I think I can guarantee that, because this is not a general-purpose protocol, but one used to move data to exactly one target system, for which I write the code.

:slight_smile: