In practically all cases a chip series contains a number of types: one fully specified part at the top of the range, with the rest of the range filled by parts of decreasing specification: reduced RAM, flash, number of peripherals, etc. For example, there is the range ATtiny1614 .. ATtiny214 in the 14-pin tinyAVR Series 1. The top of the range has 8 times more flash than the bottom of the range.
Is the price difference between the various devices in a range purely a marketing matter, or is there some engineering reason for it? For example, are cheaper dies used for the lower-specification parts, or could a part which, say, failed testing as an ATtiny814 still be used as an ATtiny214, assuming it tested OK for the latter's reduced feature set? It is pure curiosity, but I could imagine that in a lot of cases a common die is used and there is some late stage in the production process where a sort of one-time programmable operation is used to personalise the chip.
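The down-binning economics the question speculates about can be sketched with a toy simulation. Every number below (prices, pass rates, die count) is invented purely for illustration and is not real Microchip data; the point is only that salvaging full-spec failures as a lower-spec part recovers revenue that would otherwise be scrapped:

```python
import random

# Hypothetical sketch: each die is tested against the full spec; a die that
# fails may still pass a reduced-spec test and be sold as the lower-end part
# instead of being scrapped. All figures are invented for illustration.
FULL_PRICE, REDUCED_PRICE = 1.00, 0.60
P_FULL_PASS = 0.85      # probability a die passes the full-spec test
P_REDUCED_PASS = 0.95   # probability a full-spec failure still passes reduced-spec

def revenue(n_dies: int, downbin: bool) -> float:
    total = 0.0
    for _ in range(n_dies):
        if random.random() < P_FULL_PASS:
            total += FULL_PRICE
        elif downbin and random.random() < P_REDUCED_PASS:
            total += REDUCED_PRICE  # salvage as the lower-spec part
        # else: scrap, no revenue
    return total

if __name__ == "__main__":
    random.seed(42)
    n = 100_000
    print(f"without down-binning: ${revenue(n, False):,.0f}")
    random.seed(42)
    print(f"with down-binning:    ${revenue(n, True):,.0f}")
```

Under these assumed numbers, down-binning adds roughly `n * (1 - P_FULL_PASS) * P_REDUCED_PASS * REDUCED_PRICE` of extra revenue per batch, which is why the practice is so attractive whenever one die can serve two price points.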
The main manufacturing cost of silicon chips is the amount of silicon area.
Memories take a lot of area, so they probably have the biggest effect on cost.
More peripherals also take more area.
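The area-to-cost relationship can be sketched with a standard first-order estimate of gross dies per wafer (wafer area divided by die area, minus an edge-loss term). The wafer cost and die areas below are made-up illustrative figures, not real process data:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order gross-die estimate: wafer area / die area, minus an
    edge-loss correction proportional to the wafer circumference."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

if __name__ == "__main__":
    # Illustrative only: a 200 mm wafer costing $1500, comparing a small die
    # to one with more than twice the area (e.g. much more flash on board).
    wafer_cost = 1500.0
    for name, area in [("small die", 4.0), ("large die", 9.0)]:
        n = dies_per_wafer(200, area)
        print(f"{name}: {n} dies/wafer, ~${wafer_cost / n:.3f} per die")
```

The per-die cost scales nearly linearly with area, which is why extra flash and peripherals show up directly in the price.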
This is common.
Testing time is another big factor in manufacturing cost, so it's cheaper to test just one UART and call the chip a low-end variant than to test all the UARTs and call it the high-end one.
This is why you'll find many internet stories of people finding that a nominally "2K" chip actually has more memory!
There's also packaging: bigger packages are more expensive.
Volume is also a factor: economies of scale mean that a chip that's produced in high volume will be cheaper per unit than one that's only made in small lots.
No doubt marketing and supply/demand also come into it.
This is common practice for ALL solid-state devices. No one could afford the engineering to design each device separately, nor a multimillion-dollar production line for each.
But be assured, the devices have all been tested before they are packaged.
I'd need to see some specific examples to be convinced of this, at least from Microchip etc. I've seen it in the case of Chinese clones of, say, the STM32F103 ("Blue Pill"), where you often get the highest-specified device (maximum flash etc.). But that is because the Chinese manufacturers appear to have a completely different philosophy: always give the maximum, rather than designing an "Enterprise Level" product, shooting a few holes in it and declaring it "Professional", then a few more holes and labelling it "Entry Level", and so on.
What you are looking at is chip families. There is no fixed way they do it. Generally the family is fully defined before introduction; sometimes it grows and shrinks depending on the market. Many times the high-end part will be out first and the lower levels will be sorted out of it, though it could also go the other way depending on fabrication results (yield). Also, as their process improves they may do a die shrink, which lowers the cost. Suppose Ajax Cars buys a few million of the low-end microprocessors; in many cases it would be justifiable to redesign the part with just what Ajax wants. That kills off the down-sorted chips: once the new die ships, the old functions will no longer be present. It is important to follow the datasheet; there is a reason they sort them down. Many times the part will not meet some specification for whatever reason, and a lot of the time it is temperature.
This may be true of the so-called foundries, where they design the entire process to be flexible so different chips may be processed without rebuilding the whole process line. But places like Intel build a plant for a specific processor, build the number that they want, and then tear down the entire plant, including the buildings, because modifying them would cost more than building new ones.
Or they went for economy of scale and produced all the same, keeps inventory simpler.
AVRs all have the same core. Within an AVR family, only memory varies, and the price differences are not great, at least they weren't at Digi-Key and Mouser!
Number of pins and chip size, DIP vs SMT made bigger differences.
The ATmega168P wasn't near half the price of the ATmega328P (same family), which has twice the RAM, flash, and EEPROM. The 1284P isn't a lot more than the next one down, the 644P. But buy a 40-pin ATmega32A and compare that to a 328P with the same core and memory, or a 6-pin ATtiny vs a 14-pin one: more cost!
When these chips were new, I bet the differences were much bigger.
There is a lot of marketing in the price of a specific chip. This is coupled with a point no one has mentioned yet, and that is the yield: in other words, the number of good parts you get from a wafer.
This comes down to a mixture of the chip's complexity and the process feature size. The trend is to push for ever-smaller geometries, but when you do, the yields may be lower because smaller features are more sensitive to silicon crystal defects.
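The yield effect described here is often approximated by a Poisson defect model, where yield is the probability that a die of a given area contains zero killer defects. The defect density and die areas below are assumed values for illustration, not figures for any real process:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Simple Poisson yield model: probability a die has zero killer defects,
    given a uniform random defect density across the wafer."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

if __name__ == "__main__":
    d0 = 0.5  # defects per cm^2 (assumed, not a real process figure)
    for area in (0.04, 0.32):  # cm^2; e.g. a small MCU die vs one 8x larger
        print(f"area {area} cm^2: yield {poisson_yield(d0, area):.1%}")
```

Because the exponent scales with area, a larger die suffers disproportionately from the same defect density, compounding the raw area cost discussed in the other answer.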
OK. Let's take the ATmega168P as an example. Does this simply use exactly the same die as the ATmega328P and, at a late stage of the manufacturing process, say after packaging and final testing, get personalised into an ATmega168P using some sort of special one-time fuses? I guess there will anyway be a late configuration stage to set CPU signatures etc.
Neither do I, but I used to know someone who worked making chips years ago. It is possible that all the chips have all the memory on them and the connections are made during the bond-out process, which is when they attach gold wires to the chip and connect them to the carrier package.
In the early days of IBM they used to charge a fortune for a speed upgrade to their punch card reader. In fact it just involved fitting the belt on a different set of pulleys, like you do to change the speed of a bench drill. However, if you did that yourself they would refuse to provide any more maintenance on your installation, so it was a sort of blackmail.
Sperry Rand obsoleted punch cards with mag tape. As for IBM and their EIA code: I worked in a place that had IBM VAR status, and getting information on anything was a major pain for that group. They don't specialize in tech but rather in their ruthless sales force.
How are we supposed to know? I guess you could decap chips and compare die photomicrographs, but even if you find two that are the same, or different, that doesn't mean the next two you check will match. The vendor is free to sell dumbed-down 328 chips as 168s, or not.
Clearly the question was a long shot on this platform but nevertheless brought out some interesting historical details. For me it was a matter of pure curiosity to understand the economics of maintaining one die per product or having one die per product range.
In the case of software, say office suites, it is of course common to have one distribution package which contains the full feature set and to unlock features based on, say, a licence key. Car manufacturers, by contrast, can't sell customers a "dumbed-down" Volkswagen Golf when they want a Volkswagen Polo; obviously those designs differ at every stage of the manufacturing process. I guess that the production of microprocessors falls somewhere between the two extremes.
If the same die can be used then the "dumbed down" parts (within a package type) are a matter of marketing philosophy since the manufacturing process would be identical. But then the plethora of "dumbed down" versions of the parts pushes costs elsewhere, for example the distributors and users must maintain stocks of the things. Another philosophy would be to just produce the "top of the range" part in each package and maybe offer discounts to bulk users who could show that, in their end products, only a minimal feature set of the chips are used.
This is exactly the sort of question that interests me, but without any precise knowledge of the processes used in "provisioning" AVR MCUs we can only speculate. Especially with the devices that have a unique CPU signature, I can only imagine that there is some stage, late in the manufacturing process, where this is configured. I could easily imagine a sort of low-level AVRDUDE which performs a non-reversible, one-time configuration that sets the signature. But I could also imagine it initiating a test routine which could activate redundant memory blocks, peripherals etc. to compensate for some failure cases.
I used to work in the test world. For a certain manufacturer we tested SRAM ICs; their manufacturing process was so good that all tested ICs met the highest specs (100ns; always). However, we marked them based on demand; if the manufacturer's customer asked for 120ns chips, they were printed with that and if they asked for 150ns chips they were printed like that. But they were still 100ns chips.
And remember the days of the 486DX and 486SX micros; the latter had a disabled FP coprocessor. I still think the 486SX was a 486DX with a faulty FP coprocessor; why not sell it as a cheaper version?
That would show up in the current consumption of the chip. You would expect a chip with less RAM to use less current; if it does not, it might be that the full RAM is being shipped. Although this could be nullified if the bond-out process disables the RAM power as well as the RAM addressing.