
Topic: My top 5 wish list for improvements to the Arduino core library/IDE


@westfw, thanks for your feedback.

At a 5% ISR budget, you probably start to allow the sorts of functions that a moderately careful programmer probably expects to be able to put in a timer tick callback, but now you're introducing about 50 microseconds worth of random latency that can occur between any two statements in the main loop() code, which is pretty significant, and you have to start explaining to people why their sketches are acting sort of random.

What makes you think that 50us or even 100us of random latency would affect most sketches and make them 'act sort of random'? As I said in my comment, it all depends on whether you are trying to do precise timings using delayMicros or similar - and if you are, you already have the (smaller) random latency of the existing tick ISR to contend with. Any code whose timing is that critical should be executed with interrupts disabled.

I guess what it really boils down to is that providing the ability to tie into the timer tick makes it possible for ANYBODY to put code in the ISR path, and I don't TRUST most people to do that well enough that it won't cause more problems than it solves.

You might as well decide not to trust people to do direct port access, or to change the analog reference, or to write C code. Surely Arduino isn't about preventing people from doing things, it's about providing ways of doing things easily. As things stand, if I want a 1ms tick (perhaps to poll a rotary encoder), I have to set up a tick interrupt using timer 2. Having two 1ms tick interrupts going off uses more CPU time in ISRs than having one, because of the extra context save.
Formal verification of safety-critical software, software development, and electronic design and prototyping. See http://www.eschertech.com. Please do not ask for unpaid help via PM, use the forum.

Paul Stoffregen

I agree, at least partially, yes, a periodic callback can easily enable a programmer to burn far too much CPU time.  But does that really mean it shouldn't be available at all?  If they really need to check something periodically, consider the alternatives.

If it's polled in loop(), together with lots of other stuff, the polling has tremendous jitter, and the worst case times can become really horrible if Serial.print() or some other function blocks.  The Bounce library is a great example, where failure to keep each loop() execution short enough could possibly miss a short duration button press entirely!

Given the choice between a novice using a library that silently burns 6% of the CPU time for highly reliable performance versus reliability hinging upon that novice user's ability to constrain the total worst-case time that loop() runs, I'd go for highly reliable easy-to-use libraries.

Today, that choice is already made in many libraries, which commandeer the timers for this sort of purpose.  Perhaps they use slightly less CPU time than they otherwise would, but the trade-off is interfering with PWM on certain pins, incompatibility with other code using those timers, and lack of portability between chips.

There are a lot of trade-offs.  I don't have any perfect answers that avoid all potential undesirable outcomes.  But I do believe a periodic interrupt-context callback can be useful, especially for library authors.  Just because it can't be done perfectly doesn't mean it isn't worth doing.  I believe, despite imperfections, it could offer a much better alternative to how some things are currently done.


What makes you think that 50us or even 100us of random latency [is bad]
it all depends on whether you are trying to do precise timings

For example, if you start disabling interrupts for 50us (either in the ISR or in the loop code because your timing is critical), you start running into significant chances of losing characters on the serial port.  So maybe you enable interrupts during the callbacks, as Stoffregen suggested, and then you have increased possibility of stack overflows leading to (difficult to analyze) memory overflow bugs...
And I'm claiming that by the time you allow 50us ISRs, you start interfering with things that were less than "precise timings."
After all, we have people complaining about 30 seconds/day of inaccuracy in millis() (0.03% error)...

a novice using a library

What I'm really worried about is a novice using two or three libraries, each of which was sure they could use 4% of the CPU in the ISR callback, and then having other things go wrong.  In essence, that's the same problem that exists now with alternative implementations using timer2/etc.

I like your idea of having the ISR round-robin between callbacks that are called less often than every millisecond.  Part of my worry is that a 1ms ISR is really too frequent to make generally available.  A lot of system timer ticks are much longer than 1ms;
4 ms in cisco IOS (and there was a specific reason for making it that short, or it would probably be longer!), ~20ms in MSDOS, 60Hz on assorted mainframes...  Likewise, I'd be more enthusiastic about a general purpose timer/periodic callback that left the actual implementation to the core authors.

You might as well decide not to trust people to do direct port access, or to change the analog reference

And I don't.  Lots of existing libraries are essentially dangerous because they do this sort of thing, and the people who use them don't understand it.  And I agree that it should be allowed anyway...

Surely Arduino isn't about preventing people from doing things

Arduino is (I think) somewhat about hiding the "dangerous bits" away from where "people" need to worry about them.
An ISR callback would be counter to that philosophy.  We used to say, back at work, that we gave our users a lot of rope, and if they happened to tangle it around their neck before tripping, well...  they should have understood that that was a possibility.  I don't think that that's quite the way the Arduino team thinks.   They don't like to implement parts in the actual Arduino APIs that they have to say "you CAN do this, but you need to be really really careful."

Having two 1ms tick interrupts going off uses more CPU time in ISRs than having one, because of the extra context save.

It's actually really close: 21 instructions for the ISR context save/restore, and 15 additional to permit the callback...

Paul Stoffregen

You're right about the danger of keeping interrupts disabled too long.  If the callback is made from interrupt context, interrupts really need to be re-enabled.  Yes, that uses even more stack space, possibly risking an out-of-memory collision, but there really isn't much other option for interrupt context.

However, another option would be to avoid interrupt context calls completely!  That decision was made for the new serialEvent.  The huge advantage is the user doesn't need to use volatile variables and carefully disable interrupts or use some other design that assures atomic access to shared data structures.  Those are such difficult topics that normal context callbacks are pretty compelling, even if their latency is much worse.

To be useful, the callback would have to be made from a number of places that block, and maybe even some that don't if users build long-lived looping structures calling them.  From delay() would be the most obvious.  The blocking in Serial.write() when the buffer is full would be another obvious place.


Yeah, I sorta feel like allowing callbacks from the timer is a weak solution when what ought to be implemented is the beginnings of some kind of operating system kernel.  Or at least something that would generalize to also being implemented in/by a microkernel.  Implement a 1ms timer callback function and you are forever committed to a 1ms timer callback function.  Implement an asynchronous task with timed wakeup, and you have a lot more choices...  The serialEvent() implementation is a reasonable example;  I'm not crazy about the details of the current implementation,  but the details are subject to being changed, and I don't feel tied to something that is overly specific to particular hardware.
