Digital output to trigger interrupt

On an Arduino Mega 2560 R3, if an external interrupt pin (pin 20) is set as a digital output, and the interrupt is enabled and connected to an ISR using attachInterrupt() with mode RISING, will setting pin 20 HIGH trigger the interrupt?

I would like to have the code trigger an interrupt.

I haven't tried that, but it doesn't seem likely. You have assigned the pin as an output, and then you want it to serve as an input.

On the contrary - outputs are always inputs, and even if they were not - that is to say, even if digitalRead read the output register instead of the actual pin, which does happen on some microcontrollers - writing to an output would still trip whatever is connected to that input.
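For what it's worth, a quick sketch of what that means on the AVR-based boards (assuming the stock Arduino core, where digitalRead() samples the PINx register, i.e. the physical pin, rather than the output latch):

void demoReadBack() {
  pinMode(20, OUTPUT);           // pin 20 on the Mega 2560
  digitalWrite(20, HIGH);        // drive the output HIGH
  int level = digitalRead(20);   // reads the physical pin via PINx, so this comes back HIGH
  (void)level;                   // silence the unused-variable warning in this demo
}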

Now, you may not realise this, but on the AVRs (and most other devices), there are no actual bit manipulation operations. Whether done in code or in hardware, what is performed is a bit mask operation: the actual register byte is read, the mask applied (OR, AND or possibly XOR) and the result written back to the register. This means that if a particular bit is (possibly transiently) read back in a state contrary to what the output register defines, that bit will be changed by any operation purportedly affecting any other bit in the same byte-wide port.
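As a rough illustration of that read-modify-write sequence (a sketch only; AVR register names assumed, and note the SBI/CBI discussion further down this thread):

#include <avr/io.h>

void setBit2Masked(void) {
  uint8_t current = PORTD;   // 1. read the whole byte-wide output register
  current |= (1 << PD2);     // 2. apply the mask for the one bit of interest
  PORTD = current;           // 3. write the whole byte back
}

// The usual one-liner does the same thing conceptually; whether the compiler
// collapses it to a single instruction is exactly what is debated below:
void setBit2(void) {
  PORTD |= (1 << PD2);
}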

And of course, it should go without saying that when a port bit is switched from input to output, it immediately takes the state to which the output register was last written, even if the port was still an input when that write happened.

This is particularly important to remember after reset (which sets all output bits LOW), as it fooled the fellow in this recent thread. :grinning:
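A sketch of the usual precaution, assuming ATmega register names (the bit chosen here is arbitrary): write the output latch before changing the data direction, so the pin never glitches LOW.

#include <avr/io.h>

void bringUpPB5High(void) {
  PORTB |= (1 << PB5);   // pre-load the output latch HIGH (while still an input, this enables the pull-up)
  DDRB  |= (1 << PB5);   // now switch to output; the pin goes straight to HIGH with no LOW glitch
}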

Yes, you should be able to generate irqs by writing to the pin.

See datasheet page 109.

The External Interrupts are triggered by the INT7:0 pin or any of the PCINT23:0 pins.
Observe that, if enabled, the interrupts will trigger even if the INT7:0 or PCINT23:0 pins
are configured as outputs. This feature provides a way of generating a software interrupt.
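A minimal sketch along those lines (untested here, assuming the stock Arduino core on a Mega 2560, where pin 20 is an external-interrupt pin):

volatile bool fired = false;

void onRise() {
  fired = true;               // keep the ISR short: just set a flag
}

void setup() {
  Serial.begin(115200);
  pinMode(20, OUTPUT);        // pin 20 is set as an output, per the question
  digitalWrite(20, LOW);      // start LOW so the next HIGH is a rising edge
  attachInterrupt(digitalPinToInterrupt(20), onRise, RISING);
}

void loop() {
  digitalWrite(20, HIGH);     // software-generated rising edge fires the ISR
  digitalWrite(20, LOW);
  if (fired) {
    fired = false;
    Serial.println("ISR ran");
  }
  delay(500);
}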

Paul__B: Now, you may not realise this, but on the AVRs (and most other devices), there are no actual bit manipulation operations. Whether done in code or in hardware, what is performed is a bit mask operation: the actual register byte is read, the mask applied (OR, AND or possibly XOR) and the result written back to the register.

What would you call the AVR atomic SBI (set bit) and CBI (clear bit) instructions? Those instructions are modifying a single bit in an i/o register and I'd bet that it is really done at the bit level in h/w as well vs any sort of read/mask/update operation on the full register.

Other processors like the PIC32 and several peripheral device chips do support bit manipulation operations. They have special bit set and bit clear registers that you can write to in order to atomically set or clear bits within the i/o register. It is unlikely that those special registers do read/mask/update operations on the full i/o register, as that would take more logic and gates inside the chip to implement it that way vs just controlling the individual bit within the i/o register, which is easy to do at the h/w gate level.
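A quick way to see the AVR side of this for yourself, assuming avr-gcc and an ATmega target whose port registers sit in the low I/O space reachable by SBI/CBI: compile the function below with -Os and inspect the generated listing.

#include <avr/io.h>

void pulsePB0(void) {
  PORTB |= (1 << PB0);    // avr-gcc emits a single sbi here, not an in/ori/out sequence
  PORTB &= ~(1 << PB0);   // likewise a single cbi
}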

--- bill

bperrybap: What would you call the AVR atomic SBI (set bit) and CBI (clear bit) instructions?

Ah, there they are!

Couldn't spot them when I was looking (in a hurry) this morning.

bperrybap: Those instructions are modifying a single bit in an i/o register and I'd bet that it is really done at the bit level in h/w as well vs any sort of read/mask/update operation on the full register.

Actually, I'm betting the exact opposite, as it would take much more hardware to individually control bits in the I/O registers. It would require that every one of the WRx, RRx strobes on each bit be separately decoded to allow alternatively for bit-wide and byte-wide operations, and that in addition to gating the AND/OR logic (0xFF or 0x00) onto the data bus (page 76). And surely you are not anticipating gating individual bits to HIGH or LOW separately onto the data bus?

If you do the operations as masks, you still only have to have one AND/OR ALU element applied to the data bus - presumably common to the general arithmetic section (because it already has this logic) - and the same I/O strobes as all other port operations.

Note particularly that SBI and CBI take two clock cycles while IN and OUT take 1. Why do you suppose that is?

I note the interesting matter of the "synchronizer" and, even more interestingly, that there is a built-in hardware port XOR (toggle) function in the behaviour of the PINx registers.
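That hardware toggle is worth a sketch of its own (ATmega register names assumed): writing a 1 to a bit of PINx flips the corresponding bit of PORTx, with no read-modify-write involved.

#include <avr/io.h>

void togglePB5(void) {
  PINB = (1 << PB5);   // writing 1 to a PINx bit toggles the matching PORTx bit in hardware
}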

Paul__B: Actually, I'm betting the exact opposite, as it would take much more hardware to individually control bits in the I/O registers. It would require that every one of the WRx, RRx strobes on each bit be separately decoded to allow alternatively for bit-wide and byte-wide operations, and that in addition to gating the AND/OR logic (0xFF or 0x00) onto the data bus (page 76). And surely you are not anticipating gating individual bits to HIGH or LOW separately onto the data bus?

If you do the operations as masks, you still only have to have one AND/OR ALU element applied to the data bus - presumably common to the general arithmetic section (because it already has this logic) - and the same I/O strobes as all other port operations.

I've designed and implemented custom ASICs before, including ones that had ARM cores in them. In those, yes, we had atomic bit operations on certain registers. When dealing with things at the gate level, stuff like direct register bit access is quite easy. Bit twiddling in those special registers was not an i/o bus operation; it was direct access to the latch gates for the bits in the particular register. In our ASIC, going with direct bit access was not only faster (fewer clocks) but used less silicon than doing an atomic i/o update across the internal bus.

Some of that relates to pipelining and cache flushing, which can get really complicated or expensive in terms of clock cycles when trying to maintain atomicity. There were options, and each required trade-offs. The s/w could assist with the i/o operation by using the existing i/o bus and controlling the pipelining/cache in s/w, which slowed things down, or it could be done in h/w to speed things up. When doing it in h/w, doing it with bus operations was more expensive in terms of gates and clock cycles than directly modifying the register latch bits through special register accesses. We needed the speed, so we added h/w and used special registers, as that was faster and used fewer gates than going through the processor i/o bus.

I have no idea how the AVR guys implemented their silicon, but in our chips we had atomic bit capabilities in certain registers, and it was faster and used fewer gates than implementing it with atomic bus i/o updates.

Note particularly that SBI and CBI take two clock cycles while IN and OUT take 1. Why do you suppose that is?

Not sure. It depends on their internal implementation. It may not be due to a full bus i/o operation like read/mask/write going on. It is possible that the SBI and CBI instructions need an extra clock for instruction decode timing and clock synchronization, to ensure that the bit in the instruction is fully decoded and the i/o register is fully updated and stable for future bus i/o operations. But notice that it is 1 clock on some of the other AVR chips, so perhaps they optimized their h/w design on chips like the XMEGA.

bperrybap: I've designed and implemented custom ASICs before, including ones that had ARM cores in them.

So maybe you'd be interested in the following upcoming product (if you don't already know about it):

http://www.fleasystems.com/forums/showthread.php?tid=60&pid=245#pid245