Communication and interrupts

I want to start a project that will read multiple sensors over I2C, Serial, etc. and log them to an SD card via SPI, while handling user input and output through an RGB LCD with buttons (which uses an I2C port expander). I'd like some direction on how to handle the interrupt caused by user input alongside the SPI/I2C communication.

An ISR is needed to set a flag when a button is pushed, after which the main code reads which buttons were pushed and updates the LCD. But because there is so much communication, it can take a long time before the loop reaches the button/LCD code, so just setting a flag may not suffice.

I can guard the I2C and Serial with interrupt guards; however, it is generally advised not to write to Serial or the LCD during an interrupt. If I understand this correctly, an interrupt that fires while interrupts are disabled during I2C or Serial traffic stays pending and is serviced once interrupts are re-enabled. So a "flag" is effectively set internally anyway, and the LCD and button state will be updated between the interrupt guards? (not demonstrated)

Some pseudo code:

// I2C and Serial sensors read about once every 2 seconds
const unsigned long read_interval = 2000;
unsigned long i2cTimer[3];
unsigned long serialTimer[3];
volatile byte flag = 0;

void setup() {
  // set up LCD, Serial, etc.
  for (int i = 0; i < 3; i++) {
    i2cTimer[i] = millis();
    serialTimer[i] = millis();
  }
}

void loop() {

  cli();
  if (millis() - i2cTimer[0] > read_interval) {
    Read_i2C1();
    i2cTimer[0] = millis();
  }

  if (millis() - i2cTimer[1] > read_interval) {
    Read_i2C2();
    i2cTimer[1] = millis();
  }

  if (millis() - i2cTimer[2] > read_interval) {
    Read_i2C3();
    i2cTimer[2] = millis();
  }
  sei(); // pending interrupts are serviced here

  // do stuff with I2C data

  cli();
  if (millis() - serialTimer[0] > read_interval) {
    Serial1.read();
    serialTimer[0] = millis();
  }

  if (millis() - serialTimer[1] > read_interval) {
    Serial2.read();
    serialTimer[1] = millis();
  }

  if (millis() - serialTimer[2] > read_interval) {
    Serial3.read();
    serialTimer[2] = millis();
  }
  sei(); // pending interrupts are serviced here

  // do stuff with Serial data

  cli();
  SD_write(); // write a CSV line to the SD card once per minute
  sei(); // pending interrupts are serviced here

  // do laborious user-input work like resolving button state
  Write_LCD(); // update the LCD screen
  flag = 0;
}

void Button_ISR() {
  Read_buttonstate(); // save the button state; needs a read from the I2C port expander
  flag = 1;           // set the flag
  // do stuff with global/volatile variables
}

I am looking for advice on whether this is the correct structure for handling interrupts and lots of communication simultaneously. The SD card takes about 300 ms per write, for example; even if it's only once per minute, maybe the button and LCD handling should be done inside the ISR? Or is it better to delay the reading/writing of sensors and update the user first?? Serial has buffers, but what about I2C? etc.
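For example, the flag-only version I'm considering would look something like this. It's a host-testable sketch of the pattern, not the real code: `read_expander()` and the `0x04` mask are made-up stand-ins for the actual port-expander read, and the point is that the slow I2C traffic stays out of the ISR:

```cpp
#include <cstdint>

// Shared flag between the ISR and loop(); would be volatile on AVR.
volatile bool buttonFlag = false;

// Hypothetical stand-in for the I2C port-expander read.
uint8_t read_expander() { return 0x04; }

// The ISR does the bare minimum: note that a press happened.
// The slow I2C read of the expander is deferred to loop().
void button_isr() {
    buttonFlag = true;
}

// Called between the long jobs in loop() (after each sensor read,
// after SD_write()) so a press is serviced within a few hundred ms.
// Returns the pressed-button mask, or 0 if nothing is pending.
uint8_t service_buttons() {
    if (!buttonFlag) return 0;
    buttonFlag = false;
    return read_expander();  // safe here: interrupts are enabled
}
```

Calling `service_buttons()` at several points in `loop()` would bound the worst-case latency by the longest single job (the 300 ms SD write) instead of the whole loop.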

You don't need to use an ISR for humans. They never notice if you're a little late.

Simple way to do it is to poll all your devices, saving off what data you can get (not using delay()). Then go have a look and see what your human is up to. If the human hit a button, it'll still be hit when you take a look, 'cause as far as you're concerned, they'll hold it down for about 10,000 years.

-jim lee
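Something like this per device. millis() is faked here so it runs on a PC; on the board you'd use the real one, and the "fire" step would be your sensor read:

```cpp
#include <cstdint>

// Host-side stand-in for Arduino's millis().
static uint32_t fake_now = 0;
uint32_t millis() { return fake_now; }

// One timer per sensor; unsigned subtraction handles millis() rollover.
const uint32_t READ_INTERVAL = 2000;

struct Task {
    uint32_t last;
    uint32_t count;  // how many times the task has fired (for the demo)
};

// Fires (and returns true) when the interval has elapsed; otherwise
// returns immediately so the loop stays free to check the buttons.
bool run_if_due(Task &t, uint32_t now) {
    if (now - t.last >= READ_INTERVAL) {
        t.last = now;
        t.count++;  // real code: read the sensor here
        return true;
    }
    return false;
}
```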

jimLee:
You don't need to use an ISR for humans. They never notice if you're a little late.

Simple way to do it is to poll all your devices, saving off what data you can get (not using delay()). Then go have a look and see what your human is up to. If the human hit a button, it'll still be hit when you take a look, 'cause as far as you're concerned, they'll hold it down for about 10,000 years.

-jim lee

Hahaha, you are damn right, and I like the part "they'll hold it down for about 10,000 years".

Actually I never bother with interrupt pins, and I still use delay() where necessary (like when sending SMS in GSM-based projects), but guess what I do:

I modified the delay() source code into my own delay_mod() and injected a check_button_press() subroutine into it. It works pretty nicely so far, and that's also how I take care of debounce.
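Roughly like this. millis() and check_button_press() are stubbed here so it runs off the board; the real delay_mod() lives in my sketch and polls the actual buttons (with debounce) on every pass through the wait loop:

```cpp
#include <cstdint>

// Host-side stubs for the Arduino pieces this assumes.
static uint32_t fake_now = 0;
uint32_t millis() { return fake_now; }

static int presses_seen = 0;
void check_button_press() { presses_seen++; }  // hypothetical poll/debounce routine

// A delay() replacement that keeps servicing the buttons while it
// waits, instead of going dead for the whole interval.
void delay_mod(uint32_t ms) {
    uint32_t start = millis();
    while (millis() - start < ms) {
        check_button_press();  // poll inside the wait
        fake_now++;            // host only: stand-in for time passing
    }
}
```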

jimLee:
You don't need to use an ISR for humans. They never notice if you're a little late.

The trick with a human user interface is to give them feedback so they know their button-press has been detected - for example flash an LED.

...R
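For example, something as small as this. The pin number and digitalWrite() are stubbed so it runs off the board; the idea is just to acknowledge the press instantly and do the slow screen work later:

```cpp
#include <cstdint>

// Host-side stand-ins (hypothetical pin number and digitalWrite()).
const uint8_t LED_PIN = 13;
static bool led_on = false;
void digitalWrite(uint8_t, bool level) { led_on = level; }

// Light the LED the moment the press is detected, so the user knows
// it registered even though the heavy LCD work happens later.
void acknowledge_press() {
    digitalWrite(LED_PIN, true);
}

void finish_press_handling() {
    // ... do the slow LCD/menu update here ...
    digitalWrite(LED_PIN, false);
}
```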

jimLee:
You don't need to use an ISR for humans. They never notice if you're a little late.

Simple way to do it is to poll all your devices, saving off what data you can get (not using delay()). Then go have a look and see what your human is up to. If the human hit a button, it'll still be hit when you take a look, 'cause as far as you're concerned, they'll hold it down for about 10,000 years.

-jim lee

Yes, I am not using delay(), and I am currently using polling. Polling sucks. It's OK if you're only concerned with user input and maybe one or two sensors, but as the number of sensors piles up it adds considerable time, enough to miss a button push or two for sure, which I have noticed; that's why I need an ISR.

Maybe you're right that I don't need an ISR to write to the LCD and execute the button code, and should just set a flag. Anyway, my question wasn't about polling; it was about which types of communication are in danger when interrupts are used alongside multiple I2C/Serial transfers. I have read https://gammon.com.au/interrupts and Gammon Forum : Electronics : Microprocessors : SPI - Serial Peripheral Interface - for Arduino. The 'critical section' part was useful, but I don't believe it addresses this issue directly.
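As I understand the critical-section idea, the guard should wrap only the copy of a shared variable, not a whole bus transaction (Wire and HardwareSerial themselves rely on interrupts, so long cli() stretches would starve them). A host-testable sketch of what I mean, with noInterrupts()/interrupts() stubbed as no-ops so it runs on a PC:

```cpp
#include <cstdint>

// Host-side no-op stand-ins; on the board these are the real
// interrupt-disable/enable calls around the copy.
void noInterrupts() {}
void interrupts() {}

// Multi-byte value shared with an ISR. On an 8-bit AVR, reading it
// byte-by-byte with interrupts enabled could mix halves of two updates.
volatile uint32_t sensorTotal = 0;

// Take a consistent snapshot: guard only the copy, a few cycles long,
// so pending interrupts are serviced immediately afterwards.
uint32_t snapshot_total() {
    noInterrupts();
    uint32_t copy = sensorTotal;
    interrupts();
    return copy;
}
```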

syphex:
But as the amount of sensors piles up it adds considerable time, enough to miss a button push or two for sure which I have noticed that's why I need an ISR.

Then those sensors somehow halt the code. Or they are special / heavy-math sensors, or it's just plain stupid use of them :slight_smile: Without knowing what you use, I would bet my money on the second statement :slight_smile:

Keep in mind that an ISR should NEVER take long. That way it is guaranteed not to interfere with other things unless those are utterly time sensitive.