I still have the problem with SD logging. SD cards need to reorganize from time to time and that can take up to 140 ms. For debugging I can log all data in every loop. But for real running I simply disable logging.
Some things will just take longer. I haven't delved deeply into SD; I just use a library. It would be nice to interleave small tasks while waiting for the SD chip to finish its housekeeping, though, and I am sure it can be done, but it may require extra buffer space.
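One way that extra buffer space could work is double-buffering: fill one buffer with log records while the other waits for the SD code to service it, so a slow card doesn't stall loop(). This is only a sketch of the idea; `sdWrite()` is a hypothetical stand-in for the real SD library call, not part of any actual library.

```cpp
// Sketch of buffering log data so a slow SD write (the card's
// occasional ~140 ms housekeeping) doesn't stall the rest of loop().
#include <stddef.h>
#include <string.h>

const size_t BUF_SIZE = 512;          // one SD sector
char bufA[BUF_SIZE], bufB[BUF_SIZE];
char *fillBuf = bufA;                 // buffer being filled
char *writeBuf = bufB;                // buffer handed to the SD code
size_t fillLen = 0;
bool writePending = false;

// Hypothetical stand-in for the blocking SD library call.
size_t sdWrite(const char *, size_t n) { return n; }

// Queue a log record; swap buffers when the current one fills up.
bool logRecord(const char *rec) {
  size_t n = strlen(rec);
  if (fillLen + n > BUF_SIZE) {
    if (writePending) return false;   // both buffers busy: record dropped
    char *t = fillBuf; fillBuf = writeBuf; writeBuf = t;
    writePending = true;              // full buffer now waits for service
    fillLen = 0;
  }
  memcpy(fillBuf + fillLen, rec, n);
  fillLen += n;
  return true;
}

// Called once per pass through loop(): push a pending buffer out.
void serviceSd() {
  if (writePending) {
    sdWrite(writeBuf, BUF_SIZE);
    writePending = false;
  }
}
```

On real hardware `serviceSd()` would still block during the card's housekeeping pause, but everything queued in the meantime lands in the other buffer instead of being lost.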
... but anything that takes as long as an analog read gets a return to start loop() again.
Can you explain what you mean by this? Have you split the analog read into 2 actions?
No. But if I have, say, 4 analog sensors, I read 1 at a time through loop(), allowing other short tasks to run between analog reads. If I want Blink and 4 reads, the blink time gets checked 4 times for every 1 time that all the sensors are read; that is 4 times as responsive. I am giving the analog reads a lower priority.
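A minimal sketch of that "one sensor per pass" idea, assuming 4 sensors and a blink check. `millis()` and `analogRead()` are stubbed here so it compiles off-hardware; on an Arduino you would delete the stubs and toggle the LED where indicated.

```cpp
// Each trip through loop() reads a single analog channel, so the cheap
// blink check runs between every read instead of after all four.
unsigned long fakeTime = 0;
unsigned long millis() { return fakeTime; }      // stub for off-hardware build
int analogRead(int pin) { return pin * 100; }    // stub reading

const int NUM_SENSORS = 4;
int readings[NUM_SENSORS];
int nextSensor = 0;

unsigned long lastBlink = 0;
const unsigned long BLINK_MS = 500;
int blinkChecks = 0;          // counts how often the blink task is serviced

void loop() {
  // High-priority, cheap task: checked on every pass.
  if (millis() - lastBlink >= BLINK_MS) {
    lastBlink = millis();
    // toggle the LED here on real hardware
  }
  blinkChecks++;

  // Lower-priority work: only ONE analog read per pass.
  readings[nextSensor] = analogRead(nextSensor);
  nextSensor = (nextSensor + 1) % NUM_SENSORS;
}
```

Four passes of loop() read all four sensors once but give the blink check four chances to fire, which is the 4-to-1 responsiveness ratio described above.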
Hey, I usually put Serial I/O last.
With Serial input I read and operate on each character as it comes in. I don't buffer and then parse and lex; I analyze on the fly to match commands and evaluate digits. It saves time by not wasting it, and IME (in my experience) it's not more code than buffer-and-crunch, just different and faster.
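To illustrate evaluating digits on the fly, here is a sketch for a made-up command format `S<number>\n` that sets a value. The protocol and names are hypothetical, not from the post; each call handles exactly one incoming character, so in a sketch you would call `handleChar(Serial.read())` whenever `Serial.available()` is true.

```cpp
// Parse command characters as they arrive instead of buffering a line.
int setValue = 0;        // last completed "S" command value
int accum = 0;           // digits accumulated so far
bool inCommand = false;  // currently inside an S command?

void handleChar(char c) {
  if (c == 'S') {                       // command start: reset accumulator
    inCommand = true;
    accum = 0;
  } else if (inCommand && c >= '0' && c <= '9') {
    accum = accum * 10 + (c - '0');     // evaluate each digit on the fly
  } else if (inCommand && c == '\n') {
    setValue = accum;                   // command complete
    inCommand = false;
  } else {
    inCommand = false;                  // anything else aborts the command
  }
}
```

No line buffer exists anywhere; the only state is the accumulator and a flag, which is why this approach needs no more code than buffer-and-parse.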
For me it's about response and having my time-checks close. I do use micros() instead of millis() for fine work. A lot can be done in 1000 micros().
I agree lots can be done in 1000 micros(), but IMHO for most applications millis() is more than enough. I mean, if a project has 5 (software) components each taking 2 millis() per loop, that is 10 millis() for the total project loop. That is still 100 times a second. In my experience that is more than enough for the average project and for most expert projects.
Do what you want. I write so that components are as unrelated as they can be and that includes order and priority of operation. I find state machines to be a more flexible approach trading nested levels of hard-coded logic for soft links via variables and pointers... and yes I have used tables of function pointers to organize code as well. It makes it easier to add/rearrange components and make major changes in how a whole package works. It is a far more spaghetti-resistant approach.
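One reading of the "tables of function pointers" idea is sketched below (my own minimal example, not the poster's actual code): each component is a small function, the dispatcher just walks the table, and adding or reordering components means editing one array rather than restructuring nested logic.

```cpp
// Soft links via a table of function pointers instead of hard-coded calls.
typedef void (*Task)(void);

int aRuns = 0, bRuns = 0;

void taskA() { aRuns++; }   // e.g. check a button
void taskB() { bRuns++; }   // e.g. update a display

// Reordering or adding components is a one-line change here.
Task tasks[] = { taskA, taskB };
const int NUM_TASKS = sizeof(tasks) / sizeof(tasks[0]);

void runTasks() {           // called once per pass through loop()
  for (int i = 0; i < NUM_TASKS; i++) tasks[i]();
}
```

Because no task calls another directly, the components stay as unrelated as possible, which is the spaghetti resistance being described.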
What I would really like is true parallel processing (and I can achieve that with multiple AVRs if I take the time and have a task worth the effort). Until then I can code to use the principles of parallel processing on a single chip.
So each time through loop() I run all necessary checks and actions and a slice of a task, and that's it. loop() keeps running, the tasks get done, and it's dead simple to add tasks.
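A sketch of what "a slice of a task" might look like, assuming the long job is summing a big array (a stand-in for any lengthy computation): the job keeps its own state, does a small chunk per call, and reports when it is finished, so loop() never stalls on it.

```cpp
// A long job as a resumable state machine: one small slice per pass.
const int N = 1000;
int data[N];
long total = 0;
int pos = 0;
bool busy = false;

void startSum() { total = 0; pos = 0; busy = true; }

// One slice: process up to 50 elements, then return to loop().
// Returns true once the whole job is done.
bool sumSlice() {
  if (!busy) return true;
  int end = pos + 50;
  if (end > N) end = N;
  while (pos < end) total += data[pos++];
  if (pos == N) busy = false;
  return !busy;
}
```

loop() just calls `sumSlice()` once per pass alongside everything else; after 20 passes the sum is complete and every other task got 20 chances to run in between.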
I coded for money for 19 years and school/hobby almost as long and one thing I most desire to avoid is a lot of deeply nested code. Consider Bauhaus vs Art-Deco.
I believe there is a timeout on the upload execution; the disadvantage of this is that you start losing uploads if things take a little bit longer than expected.
How late is tolerable? Can it trigger an interrupt on data ready?