Why are there no 8-bit CPUs with GHz clock-rates?

Most 8-bit AVR chips have a maximum clock of 20MHz. For many applications that's enough, but it made me wonder: why do even modern 8-bit CPUs lag so far behind 32-bit CPUs in terms of clock rate?

I can see why 8-bit CPUs lack sophisticated features to deal with several program instructions simultaneously, but is the difference entirely down to pipelining? Or can 32-bit chips carry out a complete instruction cycle in less time than an 8-bit chip? And if so, how?

Please excuse the slightly OT post. I hope the subject is of some common interest. I'm sure this is basic stuff for any computer science student, but digging around on the web I didn't turn up any answers.

Hi, the simple answer is they don't need high clock speeds.
8-bit MCUs get used in everything from TV remotes to washing machines, and yes, some higher-end industrial machinery, but basically for control purposes only. They don't require the computational grunt that, say, a games console would.

I see your point, but I don't think that can be the whole story.

Fast 32-bit CPUs have been around for a long time now. They're cheap and widespread. And you'd expect modern 8-bit CPUs to be built along the same design principles: same transistors and so on, at least for the CPU core. All other things being equal an 8-bit core is much smaller and simpler than a 32-bit core, so you'd expect it to be at least as fast.

Now it doesn't make sense to design a hugely complicated 8-bit CPU with parallel instruction execution and so on, so overall performance will be less. But equally it doesn't make sense to artificially limit the clock rate when a minority of applications could use that extra speed (the same principle has driven PC development for decades).

So is the difference due to CPU complexity - do 32-bit CPUs execute 50 instructions in parallel to get their 50x higher clock rate?

Or are 8-bit CPUs built along fundamentally different lines for some good reason?

I think that pretty much is the whole story. The applications don't require higher clock rates - if they did, there would be a higher clock rate 8 bit MCU on the market, because as you point out the technology is there, and if there was a buyer then a seller would fill that void.

GHz microprocessor (not microcontroller!) clock rates are achieved by many sophisticated techniques - pipelining, caching, branch prediction, etc. These techniques require lots of silicon and lots of engineering. They also require more sophisticated compilers to take advantage of the specialized hardware architecture. Since it's a microprocessor, that means a pile of support components to make a functional computer out of it. It's a larger, more complex and more expensive system.

Users with microcontroller applications that require more computing horsepower than an 8 bit microcontroller can provide are using 16 and 32 bit microcontrollers (ARM, anyone?) for those applications.

-j

Also need to consider power consumption and heat generation. The Atmegas run happily on very little power, and don't waste it heating the environment, so can run in enclosed spaces without needing heatsinks or fans.

I would imagine it comes down to return on investment. Designing a modern 32/64-bit chip takes a large staff of hardware design engineers. To get the highest speeds you need to shrink the chip smaller and smaller, which means newer fab lines that start out in the n-billion (thousand million outside of the USA) range. To achieve those speeds, the chips do not execute instructions sequentially, but instead pipeline the whole process (with fetch, decode, execute, and write-back stages, each of which may take several internal cycles), and you can typically have many instructions in flight.

The 8-bit chips are typically very cheap compared to the higher-end chips, and most customers do not need the extra speed. If you are just sitting waiting for a button press, you don't need the speed. If you do need the speed, you would typically go to a higher-priced chip.

In addition, if you look at your average program, the 8-bit chips often spend many instructions doing the equivalent of a single 16- or 32-bit instruction on a higher-end platform. So if you need speed, you get an immediate bump just by going to a 32- or 64-bit platform.
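
To make that concrete, here is a minimal sketch (my own illustration - the function name and the register comments are just indicative of typical avr-gcc output, not taken from any datasheet):

    #include <stdint.h>

    /* Same C source, very different instruction counts.
       On a 32-bit core the addition below is one ADD instruction. */
    uint32_t add32(uint32_t a, uint32_t b)
    {
        /* avr-gcc instead chains four 8-bit adds, roughly:
             add r22, r18   ; byte 0
             adc r23, r19   ; byte 1, plus carry
             adc r24, r20   ; byte 2, plus carry
             adc r25, r21   ; byte 3, plus carry
           (exact registers vary; the 4x expansion is the point) */
        return a + b;
    }

And that's just addition - a 32-bit multiply or divide expands far more.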

Interesting question, and of course no one answer. It occurs to me to think about it in multiple dimensions. Such as speed, word size, application complexity, power consumption, interface requirements, memory requirements, storage requirements, various costs, ROI, etc. etc. Think about the "solution space" represented by these multiple dimensions, and the real-world problems that need to be solved. I suspect that things are not evenly distributed, that is, there are sweet spots around which the solutions tend to cluster.

tim7:
I see your point, but I don't think that can be the whole story.

Well, then, let's start listing out applications for 8-bit MCUs with very high clock rates that wouldn't be appropriate for 32-bit CPUs.

I can't think of one.

Re. motivation: any project with a significant investment in 8-bit architecture (whether code or experience) looking to make incremental improvements might benefit from a faster 8-bit CPU.

But I prefer to put the question the other way around: given the prevalence and cheapness of fast CPUs, why are modern 8-bit CPUs still relatively slow? I'm sure the designers did not turn their back on the available technology without a good reason. So what is it about their design which stops them running at the same clock rate as other recent processor designs?

Re. power and heat: look at mobile phone CPUs, already well into the GHz range. Power-saving modes mean there's little to gain by using slower processors -- in other words the energy consumed per operation does not vary much. But you're always free to use a slower clock should you desire.
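
(For reference, the standard first-order CMOS model says the same thing - a textbook approximation, not measured figures: dynamic power is roughly

    P ≈ C × V² × f

so energy per operation is P/f ≈ C × V², independent of clock rate. Halving the clock halves the power but doubles the run time, saving nothing per operation; only lowering the supply voltage, or shrinking C, helps.)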

Re. parallel execution: I agree there's no sense in pipelining an 8-bit processor. I'm just wondering how much of the speed difference that accounts for. Surely not all of it, 50x or more?
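
A back-of-envelope example (numbers invented purely for illustration): suppose an unpipelined core has a 50ns critical path, so it clocks at 20MHz. Split that path into 5 pipeline stages of 10ns each, add say 2ns of latch overhead per stage, and the cycle time becomes 12ns, or about 83MHz. That's roughly a 4x gain - nowhere near 50x. The rest would have to come from the transistors themselves being faster (smaller geometry, lower voltage), not from architecture.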

Re. cost and fabrication: The 32-bit Raspberry Pi on my desk cost less than the 8-bit Arduino Ethernet sitting next to it. I don't know the costs of the CPU/MCU chips, but they can't be wildly different. As for return on investment, well I bet more 8-bit chips are sold than 32-bit. They're absolutely everywhere, and even used as peripherals to 32-bit processors.

There may be a clue in the voltages the CPUs need: the ATmegas need 5V to run at 20MHz, or 1.8V at 4MHz, whilst the ARM core in the Raspberry Pi runs at 700MHz with 1.2V and the maximum recommended voltage is something like 1.4V. So it sounds like there is a fundamental difference at the transistor level. Perhaps reliability has something to do with it. Anybody know?
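
One possible piece of the puzzle, using the textbook long-channel MOSFET approximation (rough figures of my own, not from any datasheet): gate delay scales roughly as

    t_delay ∝ V / (V − Vt)²

On an old, thick-oxide process with a threshold voltage Vt of around 1V, a gate fed 5V switches something like 9x faster than the same gate at 1.8V - the same ballpark as the ATmega's 20MHz-at-5V versus 4MHz-at-1.8V spec. A modern thin-oxide process has a much lower Vt, so it switches quickly at 1.2V, but it couldn't survive 5V on its pins.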

You are way overanalyzing the situation.

It is as simple as, what applications require such performance? Companies do not create products just because they can. Instead, they create products that people will buy.

Again, what are some actual applications that an 8-bit processor with much higher clock rate would serve, that 32-bit processors wouldn't?

James, you misunderstand me. This is a purely technical question motivated by curiosity about the inner workings of microprocessors. I'm not saying Atmel should upgrade their 8-bit chips; I'm wondering why a modern chip doesn't "just work" at higher clock rates, irrespective of demand or application.

  • In 1975 when the 6502 was designed, transistors were big and slow. Result: a processor which ran at 1 or 2 MHz.
  • In 1996 when the first AVR was designed, logic transistors were capable of 200MHz. But AVRs do not run at this speed. Why not?

It could be due to memory speeds, it could be for improved reliability, it could be for compatibility with the numerous on-board peripherals a micro-controller needs, it could be that slow transistors are better in some other way, or it could be something else entirely. I thought the answer might be interesting, and that somebody here might know.

tim7:
James, you misunderstand me. This is a purely technical question motivated by curiosity about the inner workings of microprocessors. I'm not saying Atmel should upgrade their 8-bit chips; I'm wondering why a modern chip doesn't "just work" at higher clock rates, irrespective of demand or application.

The maximum speed a given chip can operate at is a function of its design, and is one of many trade-offs the manufacturer decides on when drawing up the design specifications. Speed is also a major factor in how hot the chip runs, and the chip must be designed to dissipate that heat when run at its maximum rated speed. If a given manufacturer thought they could make a profit designing and building an 8-bit micro that runs at GHz speeds, they would have done so by now, as the technology already exists. Such a profitable market must therefore not exist at this time, if ever. Such is the rule of supply and demand in a free-enterprise market: they cannot be expected to take on the large development costs of such a product in the hope that it will sell in profitable quantities; valid market research must come first to see whether the development risk is worthwhile.

I know for example that Motorola designed simple 8-bit microcontrollers and some support chips in the 80s that could operate at up to around 350 degrees F. These were aimed at the oil and gas industry, for use in real-time drill-bit monitoring instrumentation. It was a limited market, but they felt they could sell enough at a given price to make a profit, though the chips certainly cost far more than commercially rated chips for normal industrial temperature ranges.

Lefty

It isn't a technical question, and it doesn't have a technical answer - the answer is simply "because there's no market for them".

  • For faster clock speeds the fabrication has to be smaller (more expensive) and voltages must be lower as well. Just like the insulator on a wire, thicker insulation allows higher voltages. If you speed it up then you lose the ability to interface with 5V devices directly.
  • AVR chips have AD converters. Apparently this requires the larger/slower fabrication process.
  • Smaller fab processes leak more current. One of the nice features of the AVR chips is that they aren't power hungry. You'd give that away if you sped it up.
  • Cost :wink:

I don't see it as an 8-bit vs 32-bit problem at all. Just look for any 5V-capable microcontrollers over ~50MHz with AD conversion.

tim7:
James, you misunderstand me. This is a purely technical question motivated by curiosity about the inner workings of microprocessors.

That's the issue, though. From a technical perspective, there isn't a reason it couldn't be done. In fact there are probably prototypes on a shelf somewhere.

The reason you don't see them is that there isn't a market for them (or a large enough market).

There is no technical reason why you can't make an 8 bit CPU at GHz clock speeds. You can make a 32 or 64 bit one, so you just reduce the bus width.

The question is, what would you use one for?

The faster you switch transistors, the more energy they dissipate as heat. This means that a 1+GHz processor couldn't be used in a low-power application due to the energy wasted (part of that is down to smaller transistor size, though), nor could you use one without a large heatsink to dissipate the generated heat.

A hobbyist couldn't in all likelihood use one, because with a 1GHz clock, even a 1ns propagation delay is a whole missed clock cycle. This means you would need a precisely designed circuit board with all the traces in a data bus exactly the same length.
Breadboards would be totally out of the question, and so would protoboard. Even a slight amount of stray capacitance would essentially mask a 1GHz signal.
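
To put numbers on that (rule-of-thumb figures, not measurements): a signal on an FR-4 PCB trace propagates at roughly half the speed of light, about 15cm per nanosecond. At 1GHz the whole clock period is 1ns, so a 3cm length mismatch between two traces in a bus already costs 0.2ns - a fifth of the cycle - before you account for any stray capacitance at all.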

Such high speeds require expensive manufacturing methods, meaning a £2 AVR at those speeds would be out of the question - think £50 and up just for the CPU, and that still leaves you needing fast enough program memory, RAM, peripherals, etc., which wouldn't be on-chip.

For a computer, performing calculations 8 bits at a time would mean an enormous performance hit (unsigned long long maths takes on the order of 100 instructions on an 8-bit chip, versus maybe 8 or fewer on a 32-bit processor), so they simply wouldn't be suitable.

If you are going to go to the effort of designing a high-speed microcontroller, why stick to an 8-bit bus when you can easily use 32 or 64 bits (the technology is already there)? It would be like building a Ferrari and sticking a moped engine in it.

20MHz isn't the limit, though. Look at the ATxmega series: 8-bit microcontrollers with similar peripherals to the ATmegas, but running at 64MHz.

The great part about the AVR is it's very general-purpose, easy to put in many situations.
A faster CPU (like the Raspberry Pi's) needs buffering and lots of other supporting hardware just to do one job;
if all you had was the processor, you couldn't run LEDs straight off the pins (with resistors) or directly interface with a lot of things.
You can run a 328 off a 3.7V LiPo, with capacitors, reset switch, and a few LEDs all attached to the back of the physical IC.
I'd like to see anything faster than 20MHz do that.
Also, when's the last time you saw a wearable 2GHz board? AVRs can be put in pretty decently rugged situations and still work reliably.
Speaking of seeing things, have you ever seen a breadboarded 20+MHz processor?
The lower speed definitely allows a more rugged IC, less picky and more versatile.
Raising the speed would shrink the market to only the applications that need it, and further, to those in a stable environment.

Microcontrollers [typically 8-bit] are intended for low-power, relatively simple embedded products that are sold in very high volume. That's how Atmel and others make billions of $$$ each year, not by selling one-sies to hobbyists. They're not trying to compete with Intel for PC-level applications; they want to sell chips to go into millions of products, like automobiles.

If you look at the Atmel AVR and the Microchip PIC product lines, you see dozens and dozens of different controller chips differing in only a small way from each other, and with prices that differ by only pennies. You might wonder why. The idea is that people can choose the most cost-effective chip for their particular embedded app. They may only save a few pennies by choosing a smaller, slower chip over a bigger, faster one, but the pennies add up when they sell 100s of 1000s of devices.

Simply ask yourself this:

  • Who would want to spend the time and money designing a 1GHz 8-bit CPU core when there are already 1GHz+ 32/64-bit CPU cores on the market at a decent price?

It would be like taking a Yugo 3-cylinder car and putting a nitro kit in it. Yes, it would go faster, but what would be the point? You'd just look like a tit. Yes, a faster tit, but still a tit.

If you need the greater speed of a fast core, you don't want to then cripple it with narrow data paths and tiny registers.

After a little more digging, I think Chagrin may be closest to the mark with his comment about fabrication processes.

Atmel's fastest processors do in fact have on-board ADCs, but what they don't have is picoPower and high-voltage tolerance. Both of these features would require thick oxide layers, which in turn require larger and slower transistors. Layer thicknesses cannot easily be varied across a single chip, so if one part requires thick insulators (the IO ports, for example), the rest of the chip is condemned to use the same technology.

That would also explain why ATmegas need 5V to reach their maximum clock rate, whilst other designs can manage with 3.6V or 1.8V. As Tom mentioned there are faster 8-bit processors, and significantly these chips also require lower supply voltages.