Why are there no 8-bit CPUs with GHz clock-rates?

The great part about the AVR is that it's very general-purpose and easy to drop into many situations.
A faster CPU (like the Raspberry Pi's) needs buffering and a lot of other supporting hardware just to do one job;
if all you had was the processor, you couldn't run LEDs straight off the pins (with resistors) or directly interface with a lot of things.
You can run a 328 off a 3.7V LiPo, with capacitors, a reset switch, and a few LEDs all attached to the back of the physical IC.
I'd like to see anything faster than 20MHz do that.
Also, when's the last time you saw a wearable 2GHz board? AVRs can be put in pretty rugged situations and still work reliably.
lmao speaking of seeing things, have you ever seen a breadboarded 20+MHz processor?
The lower speed definitely allows a more rugged IC: less picky and more versatile.
Raising the speed would narrow the market to only those applications that need it, and further, to applications that sit in a stable environment.

Microcontrollers [typically 8-bit] are intended for low-power, relatively simple
embedded products that are sold in very high volume. That's how Atmel and others
make billions of $$$ each year, not by selling onesies to hobbyists. They're not
trying to compete with Intel for PC-level applications, they want to sell chips to
go into millions of products, like automobiles.

If you look at the Atmel AVR and the Microchip PIC product lines, you see dozens
and dozens of different controller chips, differing only in small ways from each
other and with prices that differ by only pennies. You might wonder why. The idea
is that people can choose the most cost-effective chip for their particular embedded
app. They may only save a few pennies by not using a bigger, faster chip, but the
pennies add up when they sell hundreds of thousands of devices.

Simply ask yourself this:

  • Who would want to spend the time and money designing a 1GHz 8-bit CPU core when there are already 1GHz+ 32/64-bit CPU cores on the market at a decent price?

It would be like taking a Yugo 3-cylinder car and putting a nitro kit in it. Yes, it would go faster, but what would be the point? You'd just look like a tit. Yes, a faster tit, but still a tit.

If you need the greater speed of a fast core, you don't want to then cripple it with narrow data paths and tiny registers.

After a little more digging, I think Chagrin may be closest to the mark with his comment about fabrication processes.

Atmel's fastest processors do in fact have on-board ADCs, but what they don't have is PicoPower and high-voltage tolerance. Both of these features would require thick oxide layers, which in turn requires larger and slower transistors. Layer-thicknesses cannot easily be varied across a single chip, so if one part requires thick insulators (the IO ports, for example) the rest of the chip is condemned to use the same technology.

That would also explain why ATmegas need 5V to reach their maximum clock rate, whilst other designs can manage with 3.6V or 1.8V. As Tom mentioned there are faster 8-bit processors, and significantly these chips also require lower supply voltages.
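
For reference, the ATmega328P datasheet gives the safe operating area as roughly piecewise-linear in Vcc: about 4MHz at 1.8V, 10MHz at 2.7V, and 20MHz at 4.5V. A quick sketch of that relationship (figures quoted from memory, so verify against the datasheet):

```c
/* Approximate max safe clock vs. supply voltage for an ATmega328P,
 * interpolated from the datasheet's speed-grade chart:
 * 4 MHz @ 1.8 V, 10 MHz @ 2.7 V, 20 MHz @ 4.5 V.
 * Figures from memory -- check the datasheet before relying on them. */
#include <stdio.h>

static double max_clock_mhz(double vcc)
{
    if (vcc <= 2.7)
        return 4.0 + (vcc - 1.8) * (10.0 - 4.0) / (2.7 - 1.8);
    if (vcc <= 4.5)
        return 10.0 + (vcc - 2.7) * (20.0 - 10.0) / (4.5 - 2.7);
    return 20.0;
}

int main(void)
{
    printf("max clock at 1.8 V: %4.1f MHz\n", max_clock_mhz(1.8)); /*  4.0 */
    printf("max clock at 3.3 V: %4.1f MHz\n", max_clock_mhz(3.3)); /* ~13.3 */
    printf("max clock at 5.0 V: %4.1f MHz\n", max_clock_mhz(5.0)); /* 20.0 */
    return 0;
}
```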

  1. My guess: the IC and CPU market is dictated by the military lobby. It means that even if you can manufacture an 8-bit 50 GGz microCPU, you can't do it, or at least you can't do it for profit. Unlike the regular market, which is driven by profit and interest, different rules apply here. Simply put, politicians don't want "smart" weapons spread around the globe threatening world stability in the first place, and secondly, there is much higher profit in selling a "smart" bomb or rocket than in selling a $2 chip. Who would want to fight the Taliban if they had access to the most sophisticated weaponry at a dirt-cheap price from China or Hong Kong?

  2. Radiation susceptibility grows as transistor size shrinks. A GGz chip would have a lot of trouble operating on board a telecom/GPS satellite, and even a regular airplane flight at 10 km is exposed to space radiation.

GGz

Did you mean GHz?

Radiation susceptibility grows as transistor size shrinks. A GGz chip would have a lot of trouble operating on board a telecom/GPS satellite, and even a regular airplane flight at 10 km is exposed to space radiation.

I am guessing that they are shielded anyway. Remember the gold foil on Apollo capsules?

Yes, GHz, typo.

Remember the gold foil on Apollo capsules?

Gold is good for IR or heat shielding. Radiation would require a thick plate of lead, which is heavy and costs a pile of money to launch into space, where every gram is accounted for.

Remember the gold foil on Apollo capsules?

Do you mean the gold-coloured Kapton over aluminium foil?

The foils on spacecraft are for thermal insulation. Since there's no air, they only have to combat radiative heating or cooling, and the best way to do this is with multiple layers of something light and reflective. Metallised Mylar or polyimide is commonly used. The same stuff is used to insulate ultra-low-temperature cryostats.

The foils do nothing to shield against the high-energy radiation which can disturb electronics (and astronauts).

Right.
But the electronics is still shielded somehow, since it works out there.

tim7:
The foils do nothing to shield against the high-energy radiation which can disturb electronics (and astronauts).

Right, during Apollo the astronauts knew they would have minimal radiation protection while on the Moon. However, because they were only going to be on the Moon for at most a few days, and the missions were planned to avoid the periods with the highest solar radiation and incidence of flares (which can be fatal very quickly, and which the LEM and the Apollo suits offered no protection against), they decided travelling to the Moon was worth the significant increase in the risk of developing cancer or other health problems later in life. Also, most of them who wanted families had fathered children by then...

As for the electronics, radiation was a concern as well, but the Apollo-vintage gear was generally more innately rad-tolerant than most modern equipment. This is because three things tend to make electronics more intrinsically vulnerable to radiation: smaller feature size, lower signal voltages, and higher operating frequencies. While we tend to think of areas of "high radiation", radiation only interacts with matter at the atomic or subatomic level. Therefore, the smaller the transistor gates, the greater the difference a stray ion passing through or embedded in one will make, and the higher the feature density, the more damage a single ion can do. Similarly, the lower the logic-level voltage, the easier it is to change erroneously. Finally, the higher the operating frequency, the more transient errors can affect the system in a given period of time.

florinc:
Right.
But the electronics is still shielded somehow, since it works out there.

See above. Also remember that "out there" is subjective. The vast majority of space hardware is in orbit around Earth, low enough to be within the Earth's magnetic field. While it's subjected to more radiation than it would be within the atmosphere, it's still significantly less than what's in or beyond the Van Allen belts. Deep-space probes usually do have significantly more shielding than satellites because of this fact.

Edit: And to be clear, "shielding" in the context of space probes mostly means placing as much of the less radiation-sensitive parts of the spacecraft (e.g. structure, radiators, batteries, fuel, etc...) between what you are trying to shield and the outside of the craft as possible. Mass is always at a premium, so the less material added just for radiation shielding, the better.

One more thing, I don't know if it's already been mentioned, but timing is an issue. Did you ever see those squiggly traces on motherboards and graphics cards? They seem to make no sense, but in reality they make sure that all the data lines are the same length and that data arrives on the bus at the same time. Imagine if you had to cut all your jumper wires to the same length; if you didn't, you'd have to wait until all the data lines were energised, and waiting wastes time, so there'd be no point in having a fast processor.
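
To put rough numbers on that: signals travel at roughly 15cm/ns on a typical FR4 board (a ballpark figure, not a measured value), so a mismatched wire that's harmless at 20MHz eats most of a clock period at 1GHz:

```c
/* Back-of-the-envelope trace-skew arithmetic.  The ~15 cm/ns
 * propagation speed is a ballpark figure for FR4 PCB traces. */
#include <stdio.h>

int main(void)
{
    const double v_cm_per_ns = 15.0;  /* approx. signal speed on FR4 */
    const double mismatch_cm = 10.0;  /* hypothetical: one wire 10 cm longer */
    double skew_ns = mismatch_cm / v_cm_per_ns;  /* ~0.67 ns */

    printf("skew for %.0f cm mismatch: %.2f ns\n", mismatch_cm, skew_ns);
    printf("fraction of a 20 MHz period (50 ns): %4.1f%%\n", 100.0 * skew_ns / 50.0);
    printf("fraction of a 1 GHz period (1 ns):   %4.1f%%\n", 100.0 * skew_ns / 1.0);
    return 0;
}
```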

I don't know much about ATmegas, but PICs have a clock prescaler, and there are advantages to having a slow clock, in the PIC's case as low as 32kHz, because you don't need huge delay procedures.
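
The arithmetic behind that (my own illustration, not PIC-specific): the number of CPU cycles a busy-wait delay burns scales directly with the clock, so at 32kHz the delay loops stay tiny:

```c
/* How many CPU cycles a fixed wall-clock delay costs at different
 * clock rates -- illustrative arithmetic only. */
#include <stdio.h>
#include <stdint.h>

static uint32_t cycles_for_ms(uint32_t f_cpu_hz, uint32_t ms)
{
    return (uint32_t)((uint64_t)f_cpu_hz * ms / 1000u);
}

int main(void)
{
    /* at 32.768 kHz a 1 ms delay is a tiny loop... */
    printf("1 ms at 32.768 kHz: %lu cycles\n",
           (unsigned long)cycles_for_ms(32768u, 1));      /* ~33 */
    /* ...at 20 MHz the same delay burns thousands of cycles */
    printf("1 ms at 20 MHz:     %lu cycles\n",
           (unsigned long)cycles_for_ms(20000000u, 1));   /* 20,000 */
    return 0;
}
```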

just found this http://downloadsquad.switched.com/2009/07/20/how-powerful-was-the-apollo-11-computer/

For sure clock-skew could become problematic for high-speed I/O, even for serial data transfer. OTOH the I/O doesn't necessarily have to run at the same rate as the CPU, and in fact usually doesn't. Even on the ATmegas the interfaces are usually run far below 20MHz. The benefit of a faster CPU is lower latency, which is especially useful when multitasking or doing something slow like floating-point maths.
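
To make the point about interfaces concrete: on the ATmega the SPI master, for example, clocks SCK at a division of the CPU clock. A minimal sketch, assuming the ATmega328P register names from avr/io.h:

```c
/* SPI master on an ATmega328P with SCK = F_CPU/16, i.e. 1.25 MHz
 * on a 20 MHz part -- the interface runs far below the core clock.
 * Minimal sketch; pin assignments per the 328P pinout. */
#include <avr/io.h>
#include <stdint.h>

static void spi_master_init(void)
{
    DDRB |= (1 << DDB2) | (1 << DDB3) | (1 << DDB5);  /* SS, MOSI, SCK as outputs */
    SPCR  = (1 << SPE) | (1 << MSTR) | (1 << SPR0);   /* enable, master, F_CPU/16 */
}

static uint8_t spi_transfer(uint8_t data)
{
    SPDR = data;                    /* start the transfer */
    while (!(SPSR & (1 << SPIF)))   /* busy-wait for completion */
        ;
    return SPDR;                    /* byte clocked in from the slave */
}

int main(void)
{
    spi_master_init();
    for (;;)
        (void)spi_transfer(0x55);   /* clock out a test pattern forever */
}
```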

In the case of the ATmegas it's clear that the clock is limited by the electrical properties of CPU and not by any external considerations. If high clock-rates are problematic in any particular application the user is free to slow things down to his or her taste. And at the same time any user who needs to use the highest speeds can do so - on condition that he/she deals with the consequences that brings.
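
And slowing things down on an AVR is literally one call. A minimal sketch, assuming an ATmega328P-style part with the CLKPR prescaler and avr-libc's avr/power.h helper:

```c
/* Slow the system clock at run time via the CLKPR prescaler.
 * Assumes an ATmega328P-style part; check the datasheet for the
 * divider values your chip supports. */
#include <avr/power.h>

int main(void)
{
    /* divide the incoming clock by 8: a 16 MHz crystal then gives a 2 MHz core */
    clock_prescale_set(clock_div_8);

    for (;;) {
        /* ... application code, now running 8x slower and cooler ... */
    }
}
```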

I found this interesting. The fastest CPU isn't always the best one for a particular job.
An iPhone 4S has four times the CPU power of Curiosity

John_S:
I found this interesting. The fastest CPU isn't always the best one for a particular job.
An iPhone 4S has four times the CPU power of Curiosity

That's an interesting link, thanks for sharing.:slight_smile: IMHO, it sums up the bottom line for both the thread's original question and this minor digression: the vast majority of designs use what will work (and hopefully work well) for their intended purpose, not "technology for technology's sake".

The iPhone has a general-purpose OS; I highly doubt Curiosity does. When you're writing firmware for something like a space probe, your code will be very specific to the task, with each clock cycle carefully accounted for.
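
For a flavour of what "each clock cycle carefully accounted for" looks like, here's the classic AVR cycle-counted delay loop (essentially avr-libc's _delay_loop_2, with the cycle costs annotated; a sketch, not actual probe code):

```c
/* A cycle-counted busy loop in avr-gcc inline assembly: each pass
 * costs exactly 4 CPU cycles (sbiw = 2, brne taken = 2), so the
 * routine burns count * 4 cycles, plus call overhead. */
#include <stdint.h>

static void delay_4_cycles_each(uint16_t count)
{
    __asm__ volatile (
        "1: sbiw %0, 1" "\n\t"  /* 2 cycles */
        "   brne 1b"    "\n\t"  /* 2 cycles while looping, 1 on the final pass */
        : "=w" (count)
        : "0" (count)
    );
}
```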

Every MCU manufacturer would be very happy to have a chance to produce 8-bit microcontrollers at 2GHz, indeed.. :slight_smile:
The maximum speed of a CMOS chip is mostly determined by the fabrication process. The popular indication of the fabrication process is a value in nm (nanometers) - this is the finest geometry detail they can draw/place on the chip (mask). Big CPU makers are today at 28-32nm; the limit for silicon technologies seems to be somewhere around 14nm.
Atmel's Arduino-style ATmega chips are mostly a 350nm process, some maybe 210nm. Microchip does 150nm with their PIC32MX series; the STM32F407 (Discovery kit) is 90nm. The smaller the nanometer number, the higher the frequency the chip can run; the bigger the nm number, the cheaper the process. So a chip produced at 350nm is maybe 500x cheaper to produce than a 32nm one. With 32nm you can go at 10GHz, with 350nm maybe 100MHz (ideal situation, not taking the chip's schematics and wiring into account too much). The smaller the process, the less power it takes at the same frequency; that's why e.g. the STM32F4 at 168MHz draws less power than the PIC32MX795 at 80MHz.
If Atmel decided to go 32nm with the ATmega328P they could reach 2GHz easily, but it would cost them a LOT of money. MCU makers will tell you that selling their chips is very difficult today, so they are not too keen on such technology moves.. :slight_smile:
Btw I have several 8-pin chips in my junkbox running internally at 5GHz.. :slight_smile:
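
Back to the power point: it follows from the CMOS dynamic-power relation, P ~ C·V²·f (times an activity factor). A smaller process lowers both the switched capacitance C and the supply voltage V, and V enters squared. A toy comparison (every number below is invented for illustration, not taken from any datasheet):

```c
/* Toy illustration of P ~ C * V^2 * f.  The capacitance and voltage
 * figures are invented for illustration only. */
#include <stdio.h>

static double dyn_power_watts(double c_farads, double volts, double f_hz)
{
    return c_farads * volts * volts * f_hz;  /* activity factor folded into C */
}

int main(void)
{
    /* hypothetical big old node vs. small new node: */
    double p_old = dyn_power_watts(1.0e-9, 5.0, 20e6);   /* 1 nF,   5.0 V,  20 MHz */
    double p_new = dyn_power_watts(0.1e-9, 1.2, 168e6);  /* 0.1 nF, 1.2 V, 168 MHz */

    printf("big node,    20 MHz: %.3f W\n", p_old);  /* 0.500 W */
    printf("small node, 168 MHz: %.3f W\n", p_new);  /* 0.024 W */
    return 0;
}
```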

the smaller the nanometer number

...the lower the yield per wafer.

AWOL:

the smaller the nanometer number

...the lower the yield per wafer.

The smaller the nm number, the higher the number of dies (chips) from the same wafer (i.e. 350nm -> 32nm means ~120x more chips from the same wafer size)..
BTW, they're talking about 450mm wafers today (not at Atmel, probably) - that is ~150,000mm2 of area. Provided one ATmega328P is 9mm2 (350nm fab), you get ~16,000 chips; with ~40% yield, ~7,000 good chips from one wafer - not a bad business actually :slight_smile:
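
For what it's worth, the arithmetic roughly checks out; a quick sanity check using the figures quoted above (real counts also lose dies to edge effects and scribe lanes, so the gross number here comes out a little high):

```c
/* Sanity check of the die-count figures above: 450 mm wafer,
 * 9 mm^2 die, ~40% yield.  Ignores edge loss and scribe lanes. */
#include <stdio.h>

int main(void)
{
    const double pi       = 3.14159265358979;
    const double diameter = 450.0;  /* mm */
    const double area     = pi * (diameter / 2.0) * (diameter / 2.0);
    const double die_area = 9.0;    /* mm^2, per the post */
    const double yield    = 0.40;

    double gross = area / die_area;
    printf("wafer area: ~%.0f mm^2\n", area);       /* ~159,000 */
    printf("gross dies: ~%.0f\n", gross);           /* ~17,700  */
    printf("good dies : ~%.0f\n", gross * yield);   /* ~7,100   */
    return 0;
}
```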