A general multiprocessor system.

What doesn't fit on one processor might fit on more than one.
For me that's meant coordinating communication and building the system to fit the app when you get there.

I've gotten back to coding state machines and async event-driven code, and it's good, but I also run into situations where jobs ended up on Mega2560s due to "won't fit" on an Uno.

So I looked into my bags of tricks, old and new, and have an idea to split tasks between AVRs more easily than I knew how before.

I figure to hang all of the chips on two serial buses, with every chip able to read one all the time and write to the other in turn. One master chip would write to the read channel and read the write channel. Other comms could be used, but the generic setup would use the two.

The project code would be spread among the AVRs as tasks triggered by commands and parameters passed on the read channel, as if they were cases in a state-machine switch-case, except that not all of the cases have to be on the same AVR, and no task setting up the next state needs to know where that state lives. It just needs to know the command that runs it.
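A minimal sketch of what one AVR's slice of that distributed switch-case might look like, in plain C. The command codes and frame layout here are hypothetical, just for illustration; in the real system every chip would share one agreed-upon command table but implement only its own cases:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical command codes. Every chip on the bus knows the table,
   but each chip implements only the commands it owns. */
enum { CMD_READ_SENSOR = 0x10, CMD_BLINK_LED = 0x11, CMD_LOG_DATA = 0x20 };

/* One frame as it might appear on the shared read channel. */
typedef struct {
    uint8_t cmd;       /* which task to run */
    uint8_t param[2];  /* optional parameters */
} Frame;

/* This chip's slice of the distributed state machine: handle the
   commands it owns, ignore everything else. Returns 1 if the frame
   was handled here, 0 if it belongs to some other chip on the bus. */
int dispatch(const Frame *f)
{
    switch (f->cmd) {
    case CMD_READ_SENSOR:
        printf("reading sensor %u\n", f->param[0]);
        return 1;
    case CMD_BLINK_LED:
        printf("blinking LED %u for %u ticks\n", f->param[0], f->param[1]);
        return 1;
    default:
        return 0;  /* not ours; another AVR will take it */
    }
}
```

The point of the `default: return 0;` case is the location independence: a task that wants `CMD_LOG_DATA` run next just puts that command on the bus and doesn't care which chip answers.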

That's what sticks to the wall so far, anyway. It moves the program/hardware limits out from what has to fit on one chip and/or runs up against one chip's speed limits. But it should make tasking and bigger code easier. I can even imagine situations where adding code could literally mean adding chips, without changing anything loaded on the original collection.

Like say you had a MUD and wanted to extend it....

Designed for the job
I spent ten or more years programming these things in occam (our biggest had around 100 processors), which probably explains why my C is so poor!

GoForSmoke:
I figure to hang all of the chips on two serial buses, with every chip able to read one all the time and write to the other in turn. One master chip would write to the read channel and read the write channel. Other comms could be used, but the generic setup would use the two.

Why this instead of SPI? In SPI, the “master” could basically act as the relay between all the “slave” nodes for two-way node-to-node comms. Your method might be quicker, but it would be a question of whether your serial code is fast enough to make up for the relay step of SPI.

Or - maybe you could use something like the BlackNet bus instead (hmm - I wonder if you could bastardize SPI comms onto it?):

http://www.romanblack.com/blacknet/blacknet.htm

AWOL:
Designed for the job
I spent ten or more years programming these things in occam (our biggest had around 100 processors), which probably explains why my C is so poor!

Back then I even wanted a Transputer PC card, and what I read about async code fired me up.

But those are way beyond what I'm positing and not just in the number of links.

Wasn't there a 100-node Transputer machine named Alice, or was that a different one?
And IIRC the Connection Machine was Transputers with dynamic links in hardware.

What I propose is almost the opposite: one lane down and one lane back. Not the fastest, but a lot of potential for growth.

I've still not picked a bus to start with.

cr0sh:
Why this instead of SPI? In SPI, the “master” could basically act as the relay between all the “slave” nodes for two-way node-to-node comms. Your method might be quicker, but it would be a question of whether your serial code is fast enough to make up for the relay step of SPI.

Or - maybe you could use something like the BlackNet bus instead (hmm - I wonder if you could bastardize SPI comms onto it?):

"BlackNet" many serial devices on 1 wire

Some things he does, I would not.

SPI is not beyond what I want; I just haven't thought it through. There would be more SPI clock division than usual, because keeping up with the stream should only be part of every processor's workload.

I'm thinking there should be one line to signal each new message starting. That would let processors ignore the bulk of messages not meant for them; they would just wait for the next signal.

What would work like state in a single-processor state machine would be a one- or two-byte 'command' with possible parameters, put on the bus for all to read; the one(s) with the matching code would run it.
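One way that framing could go, sketched in plain C with an assumed layout of [cmd][len][params...] (the start-signal line is left out here; the length byte alone lets a chip skip payloads it doesn't own and wait for the next frame):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame layout on the bus: [cmd][len][len bytes of params].
   A chip scans the stream, runs the frames whose cmd it owns, and
   skips the rest byte-for-byte using the length field. */
#define MY_CMD 0x11  /* the one command this example chip answers to */

static size_t frames_handled;

static void run_task(uint8_t cmd, const uint8_t *params, uint8_t len)
{
    (void)cmd; (void)params; (void)len;  /* real work would go here */
    frames_handled++;
}

/* Walk a buffered stretch of the bus; return how many frames
   this chip handled. Frames for other chips are stepped over. */
size_t scan_bus(const uint8_t *buf, size_t n)
{
    size_t i = 0;
    frames_handled = 0;
    while (i + 2 <= n) {
        uint8_t cmd = buf[i];
        uint8_t len = buf[i + 1];
        if (i + 2 + len > n)
            break;                     /* incomplete frame; wait for more */
        if (cmd == MY_CMD)
            run_task(cmd, &buf[i + 2], len);
        i += 2 + (size_t)len;          /* skip to the next frame either way */
    }
    return frames_handled;
}
```

On real AVR hardware, the USART's multi-processor communication mode (the MPCM bit) does something similar in silicon, waking slaves only on address bytes, which could stand in for the dedicated start-signal line.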

This would be for what has to be extremely fast, though that doesn't mean a processor can't be optimized or run its own state machine(s). Which processor has which code doesn't have to be random. The system should allow it, though, and that's the part I want to make and share.