Looking for a bit of help understanding the interaction of interrupts. My project needs to send and receive serial data at precisely timed intervals to a machine. It also has to send and receive WebSocket messages, which are lower priority, as they're just used for reporting and async updates.
The WebSocket servicing is done via an event handler. The serial processing is being handled by timer ISRs.
Question is...
If a timer ISR does not complete before the next timer interrupt what happens? Does the ISR execution abort and restart? Or does it block the next ISR? Or does it queue the ISR for execution (essentially blocking all other execution)?
So for example, if you have a timer ISR that runs every 100ms, and that ISR has to calculate a checksum and send 10 bytes of data down a serial wire, what happens if the ISR takes longer than 100ms to execute?
Should not happen. Demote programmer to kitchen dishwashing, he's not thinking.
Seriously, though, interrupts must be structured to do something small, then return. A superficial example might be to increment a counter, then return, while loop(), or some function it calls, takes note of the change in counter value and does something appropriate.
In your case, I think you have a fundamental problem. It sounds like your communication with the machine is coded within your interrupt structure, and that sounds like a recipe for failure.
Please tell us more about that aspect, so we can be sure that that approach is necessary.
So the situation is thus: the machine being interfaced was never designed for 'smart' operation. It doesn't have a protocol as such; it just has a controller that sends a serial control message every 100ms to the onboard chip (which controls the machine functions), and the onboard chip sends a serial status message back to the controller every 100ms, on separate wires. However, the machine has some sort of fail-safe: if it does not receive a control message for 100ms it shuts down, so these messages have to be precisely timed. Taking apart the original hardware controller, it looks like it handled these with separate ICs, so it never had to deal with timing contention.
On the Arduino side, what I'm struggling with is handling the processing of the status message (from the machine), which requires waiting on the correct sequence of bits to be received on the RX line (which can only be verified by continuously calculating the checksum), while still sending the control message within 100ms.
Here is where my logic may be flawed, but the only way I can see to do this is to handle both within the interrupt, because if the RX message processing is running in the normal loop then it will almost always miss data bits on the bus while the processor is off running the TX interrupt. However when I put the RX message processing in the interrupt it sometimes takes longer than 100ms if it 'missed' the start of the status message frame coming from the machine and has to wait for a whole new status message to come down the wire.
Any thoughts on how to achieve this? Or is it actually not possible with a single processor?
Serial is done with a dedicated hardware UART. You shouldn't have to meddle with bits, inbound or outbound; that's the UART's job. However, your code needs to be extremely responsive, monitoring the Serial.available() function and watching for an inbound character, which presumably is the start of the message, at which point your code needs to be able to reliably process the message and send a new one. All of that should be eminently doable, with the right code.
What I don't know (being unfamiliar with it) is whether your WebSocket library is blocking in any sense, because that might interfere with handling serial messages. I would be surprised if it's blocking, but one never knows.
You haven't mentioned what Arduino you're using; it could make a huge difference; some of the more advanced products are dual-core, which would make your task simpler.
It's an ESP32. Websocket Library is ESPAsyncWebServer.
I thought UART was the answer (therefore being able to use the hardware read buffer to avoid bit loss while doing TX) however the protocol is non-standard (as far as I'm aware). The start of a message is signaled by the wire pulled low for 7.5ms, then a 1 is represented by low for 1.5ms, 0 by low for 0.6ms, with 0.75ms between the bits. There is no end of message signal. Can a UART be configured to work with this?
If not, I wondered if there is a way to generate a hardware interrupt from the 7.5ms low signal on the RX pin? Problem being, you can't just use a normal falling-edge interrupt, otherwise that will trigger for every bit of the data message.
That gives you opportunities for multitasking using the ESP32's built-in FreeRTOS. With careful design of task priorities, inter-task communications, etc., there shouldn't be any problems.
The scant details on the comms protocol that you provided tell me that it works at a glacial pace relative to an ESP32. And, 100ms is an ETERNITY. Please provide complete details on the protocol ... message format, length (fixed or variable), etc.
Unlikely, but if you provide the requested protocol details, maybe there's a way to do it with the ESP32's Hardware RMT Peripheral.
Ahh, prehistoric non-standard serial. Now I see the reason for the custom handling. Yeah, I doubt you'll get anywhere with a UART. So, the custom code is the solution. Haven't had to do that, I'll leave advice on that to others, but it would seem to imply you'll be working hard to get this to work without Websockets interfering.
So I'm totally new to the RMT peripheral, but if I'm reading this correctly, it essentially allows you to provide a pulse array (state, time) to the peripheral, and then it will go off and transmit it entirely in hardware?
If you transmit 11 bytes of data at Bd = 115200 to your machine, then the required transmission time is: 1/115200 x (11 x 10) = 955 us; whereas your time tick is 100 ms. You have a lot of time there to acquire and process the incoming data/message. How many bytes of data do you expect to receive from the machine?
Let the WebSocket messages be handled by interrupts. Data exchange with your machine could be handled by a polling strategy in the loop() function.
void loop()
{
  static unsigned long lastTx = 0;
  if (millis() - lastTx >= 100)     // 100 ms has elapsed
  {
    lastTx += 100;                  // re-arm for the next 100 ms slot
    sendControlMessage();           // send data to machine (placeholder)
    if (Serial2.available())        // message is coming from machine?
    {
      receiveAndProcessStatus();    // receive incoming message and process (placeholder)
    }
  }
}
This would be true if it were operating at 115200 Bd, but as noted above it's running a proprietary protocol which is significantly slower. So slow, in fact, that it would not be able to transmit an 'all 1s' message within the time window. Each 1 transmitted takes 1.5ms, plus the inter-bit wait of 0.6ms, meaning it takes a total of 2.1ms to transmit a single binary 1. 10 bytes, 8 bits per byte: 80 x 2.1ms = 168ms, PLUS the preamble (7.5ms high, 3ms low) = 178.5ms to transmit the worst-case FF FF FF FF FF FF FF FF FF FF message.
Now, whether by accident or design this is never a problem as 7 of the bytes are binary switches and only ever 0000 0000 or 0000 0001 so it just fits within the transmission window, but only just. So unfortunately in this case, I've got 100ms to transmit something that takes almost 100ms to transmit. So any processing interrupt is going to break it.
I think handing it off to the RMT is the only way.
Well, the protocol is so slow you could probably bit-bang it in a FreeRTOS task on the ESP32, as long as you carefully thought out the timing requirements and relationships between all the tasks in the project. But I'd definitely start by attempting it with the RMT.
It's a roasting oven from a Chinese manufacturer and came with a wired controller; that controller is now broken and no spare parts are available. Nothing about it is standard.
To give you some idea of the quality of design/build the controller connects to the oven via a USB connector, but it's not USB, it just uses the wires for the protocol described above and power. I guess USB ports were the cheapest 4-pin port they could find!