If someone asked you how "fast" an Arduino is... what would you compare it to? Say you were talking to someone not familiar with microprocessors in general.
Could it be roughly compared to some computer of the past? Say an early 8088? An old Atari/Commodore/Timex Sinclair/etc.?
The 8088 used in the original IBM PC took 3 cycles of its 4.77MHz clock for something like a register-to-register ADD, which the Arduino does in 1 cycle of its 16MHz clock; 3 cycles at 4.77MHz is about 0.63µs per add versus 0.0625µs on the AVR, so roughly a 10x gap. Of course, the 8088 had a lot more memory and a more complex instruction set, and its 16-bit adds ran at the same speed, but that order-of-magnitude figure should be about right.
The "4MHz Z80" used in CP/M machines prior to the IBM PC took 4 clock cycles for similar operations, so it was about 16x slower than an Arduino.
My first Apple II had 4k of memory (storing programs and data) and ran at 1MHz. Arduino runs 16 times faster and the latest versions have 32k of program memory and 2k of RAM. But comparing computers with different architectures is like comparing apples to ? (well, you see what I mean).
But if you want to express some idea of how fast an Arduino can be: it can switch a pin on or off (using direct port I/O) in less than the time it takes a beam of light to travel 20 meters.
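For anyone curious, "direct port I/O" just means writing the AVR's port registers instead of calling digitalWrite(). A minimal sketch, assuming a standard ATmega168/328 board where digital pin 13 is bit 5 of PORTB:

    // Toggle pin 13 by writing PORTB directly.
    // avr-gcc typically turns each of these into a single sbi/cbi instruction,
    // only a couple of clock cycles apiece at 16MHz.
    void setup() {
      DDRB |= _BV(PB5);      // make PB5 (digital pin 13) an output
    }

    void loop() {
      PORTB |= _BV(PB5);     // pin high
      PORTB &= ~_BV(PB5);    // pin low
    }

(digitalWrite() does the same thing in the end, but the per-call bookkeeping it does costs far more cycles than the port write itself.)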
Quote: My first Apple II had 4k of memory (storing programs and data)
That raises a very interesting point too. The ATmega's Harvard architecture (separate data and program memories) means it can fetch the next instruction while it's still executing the current one, even when that instruction has to do I/O. That can also give it a nice speed boost over the 6502 in the Apple II and the 8088-family chips.
Quote: and ran at 1MHz. Arduino runs 16 times faster and the latest versions have 32k of program memory and 2k of RAM.
I took a fourth year computer architecture course that can be summed up as "clock speed has nothing to do with computing power".
So (work/instruction)*MIPS? Except... how is "work" quantified? It doesn't seem as straightforward as measuring joules. Well, it might be... a fast chip might be considered a hot one.
You begin to see the dilemma in performance metrics.
I adopted the Arduino as my pet microcontroller not because of the AVR's performance (even though it outruns the old PIC 16F84 by a factor of 3 or 4 - see the "cycles per instruction" (CPI) metric), but because of the development environment that comes with it.
IMO, the Arduino seems to hit the sweet spot for hobbyists. It's inexpensive, has reasonable performance, and has a very usable tool set. I tried PIC, but decent tools are expensive. And for the price range I was looking for, the Arduino offers much more performance. And you can't beat the price of the toolchain. I found myself doing much more work to get much less return with PIC.
Quote: So (work/instruction)*MIPS? Except... how is "work" quantified? It doesn't seem as straightforward as measuring joules. Well, it might be... a fast chip might be considered a hot one.
This is where that whole course went. The Arduino and PIC chips are RISC processors, so each instruction does very little. A PIC16-series chip doesn't even have a multiply instruction, just 8-bit addition and subtraction. So on a PIC or Arduino it might take you tens of thousands of instructions to do a 64-bit floating-point division that a modern desktop CPU can do in a single instruction. By that logic, a desktop running at 1 MIPS might be considered 50,000 times faster than an Arduino running at 1 MIPS. But not every instruction is a floating-point division, so that comparison wouldn't be accurate.
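To make that concrete, here's roughly the sort of routine a compiler has to generate on a core with no hardware multiplier: a plain shift-and-add multiply in C. (Just an illustration of the idea, not the actual code any particular compiler emits.)

    #include <stdint.h>

    // Multiply two 8-bit values on a chip with no MUL instruction:
    // add a shifted copy of 'a' for every set bit in 'b'.
    uint16_t mul8x8(uint8_t a, uint8_t b) {
        uint16_t result = 0;
        uint16_t addend = a;
        while (b) {
            if (b & 1)             // low bit of b set?
                result += addend;  // add the current shifted copy of a
            addend <<= 1;          // shift a one place left
            b >>= 1;               // move to the next bit of b
        }
        return result;
    }

Eight little loop iterations just to do one 8x8 multiply, and a 64-bit floating-point division is built out of many operations like this, which is where those huge instruction counts come from.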
The MEGA is exactly the same speed as the regular Arduino (one of its weak points, I guess.) However, it has MUCH more memory: about 8x the program memory and 8x the RAM of the mega168 used in most Arduinos (4x and 4x compared to the mega328.) Plus more pins, timers, and UARTs, all of which can save a lot of CPU cycles compared to having to implement them in software or via I/O expanders of some kind.
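As an example of the "saves CPU cycles" point: on the Mega you can hang a second serial device off one of the extra hardware UARTs instead of bit-banging it in software. A minimal sketch, assuming something is wired to the Serial1 pins (18/TX1 and 19/RX1), that just echoes bytes to the USB serial port:

    // The UART hardware shifts the bits in and out on its own;
    // the CPU only has to move whole bytes.
    void setup() {
      Serial.begin(9600);    // UART 0, the USB link
      Serial1.begin(9600);   // extra hardware UART on pins 18 (TX1) / 19 (RX1)
    }

    void loop() {
      if (Serial1.available()) {
        Serial.write(Serial1.read());
      }
    }

Doing the same job with a software serial library means the CPU has to time every single bit itself.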