Keep `delay` based approach while doing things concurrently with `proto_activities`

New developers start with delay() but soon find that doing multiple things at once requires a different and more complex approach: using millis() to measure elapsed time so each piece of work runs when it is ready, plus state machines to stitch those pieces back together.

proto_activities is a header-only library that simplifies these tasks by running activities concurrently; each activity behaves like its own loop and may contain its own delays. With the help of preemption constructs, complex state flow can be achieved in a modular and deterministic way.

A running proto_activities example demonstrating the approach can be found here.


This forum has a tutorial on doing multiple things at once.


Here is how the StateChangeDetection tutorial would look with proto_activities:
wokwi.

Can you explain why we would want to do this? As I see it, there are far better ways of doing this, as @sonofcy said.

What makes your library better? What does it do that the other normal way doesn't?

Not picking on you, asking productive questions here.

Define concurrent. In 50 years I have yet to see any CPU run more than one thing at a time. That means that pretending to do more than one thing requires fast task switching based on some sort of priority scheme. I know of two approaches (pre-emptive and co-operative); if you have invented a third, subject it to peer review.


How does it compare with

and

and

and the other things in


I think it helps you write code that is easier to understand, maintain, and reuse.

This is achieved by explicitly declaring concurrency and preemption aspects that are otherwise handled in a more ad hoc manner.

To make it short: this lib uses a co-operative approach to concurrency.

proto_activities is based on the same protothreads approach as some other libs (such as ALib0) but is special in that it does not require the use of static variables.

Not something I would be interested in, then. I held the world record for fastest transaction processor at one time, and that only happened thanks to pre-emptive tasking.


Intel's hyperthreading does that. It can run multiple threads of execution on a single core.

And depending on how you look at it, it's possible to run delay() (or a protothread-aware version of it) "concurrently" on multiple threads since it's not really running any CPU code while it waits. In other words, they all run concurrently in wall time rather than in CPU time.

They just renamed things and/or redefined things. My training started on discrete hardware, then micro code, then OS so I have seen all the underlying code and know how it works. It's still a binary world no matter what you label it as.


This parallels the talking heads of cold fusion quackery of the 70s, 80s and 90s. More snake oil abstraction layers.


I've played with protothreads in the past, and the OP's library is a layer around it to make it easier to pass around context. In that way it appears to do what it's supposed to.

Then again, Rob Pike (of Go fame) pointed out the distinction between concurrency and parallelism (see Concurrency is not Parallelism). You can have processes (tasks, threads, whatever) that don't run simultaneously but are still concurrent. The point of concurrency is to decompose a problem into smaller independent processes that can run on their own (whether in parallel or not). From this standpoint, OP's library (and many of the other solutions offered) seems to fit the bill.

After using various threading systems (pthreads, C++ threads, and my own cooperative and preemptive threading libraries), I've fairly recently started using Go and its goroutines (and channels for communicating between them). It's a big shift in thinking about concurrency, and it really simplifies breaking down a program into concurrent processes. I think it would be interesting to bring that system to Arduino, rather than implementing yet another threading system. Or just use Go on Arduino (https://tinygo.org/). :smiley:


"Hyper-Threading doesn't mean a core is truly doing two things simultaneously in the same sense as having two physical cores. Instead, it's better understood as optimizing resource use by quickly switching between two tasks or threads when one would otherwise be idle.

In a traditional single-threaded core, there are moments when parts of the CPU are waiting, for instance, for data to be retrieved from memory. Hyper-Threading takes advantage of these idle moments by scheduling instructions from a second thread to keep the core busy. So, while it's not "literally" performing two tasks at the exact same moment in time (like two physical cores would), it's effectively multitasking by keeping more of the core's resources in use."

Also this:
How does my computer do several things at once?

My understanding is that Hyper-Threading is basically an extension of concepts like pipelining and out-of-order execution (OoOE). If part of a core is unused at any moment, some part of an instruction from a thread can be executed in that part of the core at the same time (i.e., during the same clock cycle) as part of an instruction from another thread is running on a different part of the core.

Just as a single core can run two instructions like movl $1, %eax and movl $2, %ebx at the same time (not just "quickly switching between" them as your quote implies), a single core could run two instructions from two different threads at the same time.

I haven't looked at the technology too closely, but I imagine hyper-threading is basically the same as OoOE but with each instruction (or micro-instruction) in the pipeline tagged with the thread it belongs to so the core can execute it with the thread's context (set of registers, status flags, etc.).

This is referring to multi-tasking on a classic single-threaded CPU. It also applies to Hyper-threaded or multiple cores when more than one thread is running on a single physical or logical core, though kernels strive to balance threads across all cores to avoid that. IIRC, FreeBSD, once per second, moves some threads from the core with the most running threads to the core with the fewest running threads; this eventually balances them.

Edit: FreeBSD's ULE scheduler actually balances threads between two cores twice per second:

Twice per second the sched_balance() routine picks the most-loaded and least-loaded processors in the system and equalizes their run queues

(From https://dl.acm.org/doi/fullHtml/10.1145/1035594.1035622#:~:text=Twice%20per%20second%20the%20sched_balance()%20routine%20picks%20the%20most-loaded%20and%20least-loaded%20processors%20in%20the%20system%20and%20equalizes%20their%20run%20queues)

Go looks interesting but I would like to see more language support for structured concurrency.

Here is another example of proto_activities, which shows how goto-like state machines can be replaced by structured programming, leading to programs that are simpler to understand and easier to maintain.

One of Go's philosophies is to keep the language itself as simple as possible while still being useful and expressive. Many features have been proposed to be added to the core language, but most are rejected if they do little but add some syntactic sugar.

Structured concurrency would likely be considered "syntactic sugar" since packages like sync already provide most (if not all) of your structured concurrency needs (e.g., waiting for a group of goroutines to finish).