How about an Arduino-style supercomputer the size of a credit card

Hi,

I came across a great supercomputer project for $99: the Parallella supercomputer from Adapteva, on Kickstarter. I was wondering: what if one day there were an Arduino-based programming environment at supercomputer level?

http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone?ref=category

Please support and save this project. It has 60 hours to go and needs to collect about $200,000.

Regards
(Fan @ Parallella)

For anyone who has no idea what he's pitching here, I'll sum it up.

Inside your phone you have a small ARM chip, a 1 GHz or 1.5 GHz computer (single/dual/quad core) as we see in the latest mobile devices: phones, pads, tablets, etc. So why do I PERSONALLY like this idea, and why won't I consider it spam? On the grounds that it really is a great idea...

What this guy is trying to do (or has done; he just needs the backing, i.e. us, you and me) is scale 1 GHz dual-core processors and fit them onto a single CPU-sized chip, the first one being 64 processors on something the size of a credit card. Suppose the technique works and scales: 64/128/256/512/1024 dual-core 1.x GHz processors on a chip... what about power and heat? If you can pull it off, share the load: e.g. use 10 cores all clocked to 100 MHz, instead of 1 core at 1 GHz, to do the same job and reduce heat. Is it the same heat, or indeed less? I'm going to read up more in a minute...
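A rough back-of-envelope on that heat question, using the classic CMOS dynamic-power model P ≈ C·V²·f. The capacitance and voltage numbers below are illustrative assumptions, not Epiphany datasheet values:

```python
# Back-of-envelope dynamic power comparison (P ~ C * V^2 * f).
# All constants here are assumed, illustrative values.

def dynamic_power(c_eff, voltage, freq_hz):
    """Classic CMOS dynamic power model: P = C * V^2 * f (watts)."""
    return c_eff * voltage ** 2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance, farads (assumed)

# One core at 1 GHz, 1.2 V.
one_fast = dynamic_power(C_EFF, 1.2, 1e9)

# Ten cores at 100 MHz, same voltage: total switching power is
# identical, because 10 * 100 MHz = 1 GHz of aggregate clocking.
ten_slow_same_v = 10 * dynamic_power(C_EFF, 1.2, 100e6)

# But a lower clock usually permits a lower supply voltage, and
# power scales with V^2, so the real win comes from voltage scaling.
ten_slow_low_v = 10 * dynamic_power(C_EFF, 0.9, 100e6)
```

So at the same voltage it's a wash; the "less heat" answer depends on being able to drop the voltage along with the frequency.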

Great idea even if it is /spam/ :fearful:

How is this different from Raspberry Pi?

WizenedEE:
How is this different from Raspberry Pi?

In terms of raw CPU power, the Raspberry Pi has an entry-level single-core 700 MHz processor; your desktop computer is many times more powerful.

What "this" is:

64 1 GHz (dual-core) processors on something the size of a credit card, for $99...

In other words, this is significantly more powerful than your desktop computer, but I would not class it as a "supercomputer"; a bloody powerful one, for sure. If the GPU were something decent like a PowerVR, a Mali, or even a Tegra... then OMG, I'd dump my PC in a heartbeat and use one of these...

Cores 1-12 process video streaming, cores 13-20 (all dual-core 1 GHz) are dedicated to internet browsing, etc., with 64 cores to choose from. All of them could be throttled down to 100 MHz or 10 MHz when idle, or put to sleep. You could browse the net while video streaming and play a few Flash videos with ease... The only thing I'd like to be certain of is having access to SATA or some kind of adapter, so I can access my hard disk; also memory, 2 GB minimum, and my PC's gone :slight_smile:

64 1 GHz (dual-core) processors on something the size of a credit card, for $99...

Not quite. The Epiphany processor contains 64 single cores. The Parallella board contains an Epiphany processor, plus a Zynq-7000 SoC (with a dual-core ARM Cortex-A9) to interface the Epiphany to the rest of the hardware. Still pretty interesting, though.

I can't figure out what to do with 84 MHz of 32-bit ALU on the Due, and you want me to contemplate a 66 GIPS multicore?

The problem with massively parallel computers is that you need applications with enormous data sets where you do the same processing on each subset of the data. You need to be able to break the data into subparts so that each processing element can work on its own subset without getting bogged down waiting for data from other nodes.
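The decomposition pattern described above can be sketched in a few lines. This is illustrative only: on a Parallella each chunk would be handed to one Epiphany core, whereas here the "nodes" run sequentially just to show the structure.

```python
# Sketch of data-parallel decomposition: split a data set into
# independent chunks so each "node" can compute without waiting
# on the others, with one cheap reduction at the end.

def split(data, n_parts):
    """Divide data into n_parts contiguous, near-equal chunks."""
    k, r = divmod(len(data), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        end = start + k + (1 if i < r else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

def node_work(chunk):
    """The same computation on every subset (here: sum of squares)."""
    return sum(x * x for x in chunk)

data = list(range(1000))
partials = [node_work(c) for c in split(data, 64)]  # 64 independent tasks
total = sum(partials)  # the only step that needs data from all nodes
```

The point of the structure is that `node_work` touches only its own chunk; inter-node communication is confined to the final reduction.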

Sure, there are various things that are massively parallel (rendering, for instance), but I suspect that unless you already have an application that is massively parallel, it isn't the chip for you. When I was between jobs the last time, about 4 years ago, Adapteva may have been one of the companies I was looking at, but it wasn't the right fit.

It doesn't have to be "enormous" volumes of data (except by 8-bit MCU standards): you could also do lots of processing on small-ish volumes.

Or do "soft peripherals" a la Ubicom (recently merged into Qualcomm): they used multiple cores to bit-bang interfaces like Ethernet (and even SATA, according to one story I read recently), instead of making many models with different specialized "hard" peripherals.
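To make the "soft peripheral" idea concrete, here is a minimal sketch of the simplest case: a core bit-banging a UART frame entirely in software (8N1 framing: one start bit, 8 data bits LSB-first, one stop bit). Timing would come from the dedicated core's clock; this sketch only builds the bit sequence that core would drive onto a GPIO pin.

```python
# "Soft peripheral" sketch: encode one byte as an 8N1 UART frame.
# A dedicated core would clock these bits out on a pin at the baud
# rate, with no interrupt jitter from the rest of the system.

def uart_frame_8n1(byte):
    bits = [0]                                   # start bit (line goes low)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit (line idles high)
    return bits

frame = uart_frame_8n1(0x55)  # alternating bit pattern, handy on a scope
```

Ubicom-style soft Ethernet or SATA is the same idea scaled up: enough deterministic cycles per bit that software can meet the line's timing.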

Let's say you want to do something like the Ardupilot: dedicate a core to each servo and each input sensor, and you never need to worry about your elevators wobbling because another part of the system is busy normalizing a temperature reading, passing data to/from a distant central PC, or encrypting the data link so no one can hijack your UAV or steal the data you're collecting.

Similarly, for ground robots, you could create custom servo and motor speed controllers in software, with high-level commands like "move forward at 3.2 feet per second" or "wag the servo between 85 and 125 degrees with a period of 3.2 seconds".
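That "wag" command could be as simple as a dedicated core evaluating a sinusoid every control tick and emitting the matching PWM pulse. A minimal sketch, assuming the common 0-180 degree to 1000-2000 microsecond hobby-servo mapping (function names are mine, not from any real servo library):

```python
import math

# Sketch of: "wag the servo between 85 and 125 degrees with a
# period of 3.2 seconds". A dedicated core would call wag_angle()
# every tick and drive the pin high for pulse_us() microseconds.

def wag_angle(t, lo=85.0, hi=125.0, period=3.2):
    """Servo angle in degrees at time t: sinusoid between lo and hi."""
    mid = (lo + hi) / 2.0
    amp = (hi - lo) / 2.0
    return mid + amp * math.sin(2.0 * math.pi * t / period)

def pulse_us(angle_deg):
    """Map 0-180 degrees to a 1000-2000 microsecond servo pulse."""
    return 1000.0 + (angle_deg / 180.0) * 1000.0

angle = wag_angle(0.8)  # a quarter period in: sin peaks, angle = 125
```

The high-level command just sets `lo`, `hi`, and `period`; the core handles the timing forever after, untouched by whatever the other 63 cores are doing.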

64 cores is more than you'd need for the vast majority of hobby-level applications, but I could see even a relatively simple robot using 16 or more.

Kickstarter is spam, this guy is a spammer. Let's not waste brain power on this post.