I was just about to change a 10 second delay I have in a program to a millis() alternative because, as we all know, "blocking code is bad!"... or is it? I suddenly stopped myself as I realised that whilst the delay is happening I don't want the user pressing buttons and potentially causing the software to misbehave, i.e. I want the 10 second delay to run until it's finished without any interruptions.
Does this make sense, or should I still use millis() and find another way of preventing buttons being pressed? (Sorry, I've not posted any code, but it's a very large program.)
What I do is use a state machine. When it's in the "delay" state, using millis() lets it do other stuff one day if or when you ever need it to, and times it out of that state into some other state when the "delay" is over. To prevent any issues when the user pokes the buttons, just don't react to those buttons while in the "delay" state. A rough sketch of the idea is below.
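For what it's worth, here's a minimal sketch of that idea. The pin numbers, the button wiring (INPUT_PULLUP on pin 2), the LED on pin 13, and the state names IDLE/DELAYING are all just assumptions for illustration, not anything from your actual program:

```cpp
// Minimal sketch: a millis()-based "delay" state that ignores the button.
// Assumed hardware: a button on pin 2 (wired to ground, using INPUT_PULLUP)
// and an LED on pin 13. The 10 second "delay" starts when the button is pressed.

const uint8_t BUTTON_PIN = 2;
const uint8_t LED_PIN = 13;
const unsigned long DELAY_MS = 10000UL;   // the 10 second "delay"

enum State { IDLE, DELAYING };
State state = IDLE;
unsigned long delayStart = 0;             // millis() value when the delay began

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  switch (state) {

    case IDLE:
      // Only in this state do we look at the button at all.
      if (digitalRead(BUTTON_PIN) == LOW) {
        digitalWrite(LED_PIN, HIGH);      // whatever the action happens to be
        delayStart = millis();
        state = DELAYING;
      }
      break;

    case DELAYING:
      // The button simply isn't read here, so presses are ignored.
      // Unsigned subtraction also handles millis() rollover correctly.
      if (millis() - delayStart >= DELAY_MS) {
        digitalWrite(LED_PIN, LOW);
        state = IDLE;                     // time out back into IDLE
      }
      break;
  }

  // ...other non-blocking work can go here and keeps running
  // even while the program is "delaying".
}
```

The point is that loop() never blocks: the button is ignored during the delay not because the processor is stuck in delay(), but because the DELAYING state deliberately doesn't read it, while everything else stays free to run.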
That said, if there's no harm in using delay(), use it. It does mean your code's not future-proofed, of course, which might one day, who knows, mean a major do-over is needed.