Their answer is probably "I want to be able to do several things at one time." That's probably what their answer should be. That's the way most desktop users think of things as well.
If you can get some quantitative answers for the value of "several", and the limitations (timewise) of "at once", you'd be doing really well. Consider:
You're almost certainly right.
I have experience with two types of users: research physicists and EECS students from UC Berkeley. Both groups are comfortable using an RTOS. Older EEs, not so much.
I am a PhD physicist and I have done architecture and design of control systems for many large experiments. We started using RTOSs about 40 years ago.
I did some work on the LHC, the big machine at CERN used in the search for the Higgs boson. CERN has used LynxOS in control systems for over twenty years.
Some of my colleagues left the lab to develop VxWorks. NASA JPL uses VxWorks in all Mars rovers.
I find it hard to understand the resistance to using RTOSs.
Maybe a tutorial with a more realistic example application would help Arduino users understand the value of an RTOS. Cortex-M was clearly designed with RTOS use in mind.
On the other hand, the value isn't obvious unless you understand things like this: reading an ADC, even at relatively slow rates like 100 Hz, requires timing jitter on the order of one microsecond if you want high SNR in the signal. The SNR of an ideal 10-bit ADC is about 62 dB. At 100 Hz, 4 microseconds of jitter in the read time reduces the SNR to about 52 dB. A coop scheduler just won't schedule a thread with that little jitter, and reading a sensor in an OS thread is easier than setting up a timer-driven ISR.
*will* require, or *could* require? I find that number hard to believe, depending on what it is measuring. Although... CPU time is probably not the bottleneck resource, so it probably doesn't matter.
The measure generally means CPU time. An RTOS has no extra overhead unless you call an OS function or an event causes a context switch. You don't do a context switch for every interrupt, and many fast interrupts can be handled just like the bare-metal approach, without any OS overhead. For example, a serial driver puts bytes in a queue just like the Arduino drivers do.
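That kind of serial driver is typically a single-producer ring buffer: the ISR pushes bytes, a thread pops them, and no OS call is involved on the fast path. A minimal sketch (not any particular RTOS's driver; the names are made up):

```c
#include <stdint.h>
#include <stdbool.h>

#define RX_BUF_SIZE 64  /* power of two, so the index wraps with a mask */

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head;  /* written only by the ISR */
static volatile uint8_t rx_tail;  /* written only by the reader thread */

/* Called from the UART receive interrupt: store the byte, drop on overflow. */
void uart_rx_isr(uint8_t byte) {
    uint8_t next = (uint8_t)((rx_head + 1) & (RX_BUF_SIZE - 1));
    if (next != rx_tail) {        /* buffer not full */
        rx_buf[rx_head] = byte;
        rx_head = next;
    }
}

/* Called from a thread: fetch one byte if available. */
bool uart_read(uint8_t *byte) {
    if (rx_tail == rx_head)
        return false;             /* empty */
    *byte = rx_buf[rx_tail];
    rx_tail = (uint8_t)((rx_tail + 1) & (RX_BUF_SIZE - 1));
    return true;
}
```

With one writer and one reader, the head and tail indices each have a single owner, so this works without disabling interrupts on most MCUs.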
On a chip like a 72 MHz STM32, a context switch with ChibiOS costs just over one microsecond. A well-designed application should not have more than a few thousand context switches per second, so the overhead will be a fraction of a percent.
A preemptive RTOS responds to an important high-priority event by running the handler thread within about a microsecond; with a coop scheduler, who knows.