Why 1024 bits for Arduino?

A video stated that the number of bits used by an Arduino is 1023. Why? I have read that the reason is that 2 to the 10th power is 1024. Why is this the case for an Arduino? Why not 2 to the 11th power? Should I not worry about understanding it and just take it on "faith"?

I understand that 0-1023 is 1024 bits.

For what?

The Uno has:

  • 32K bytes of in-system self-programmable flash program memory
  • 1K bytes of EEPROM
  • 2K bytes of internal SRAM
  • a 10-bit ADC

The Mega has:

  • 64K/128K/256K bytes of in-system self-programmable flash
  • 4K bytes of EEPROM
  • 8K bytes of internal SRAM

It's 10 bits.

There is a lot of rubbish on the internet, or you misunderstood what they were talking about. It's probably about the ADC, and that is 10 bits.

The analog-to-digital converter on the Uno and other "basic" Arduinos is 10 bits. (It gets read into a 16-bit integer with the 6 most-significant bits unused and set to zero.)
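If it helps to see it, here's a minimal sketch (assuming something like a potentiometer is wired to analog pin A0) showing the 10-bit result arriving in an ordinary int:

```cpp
// Read the 10-bit ADC on pin A0 (assumed wiring) and print the result.
void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(A0);  // 10-bit result stored in a 16-bit int
  Serial.println(raw);       // always 0..1023; the top 6 bits stay 0
  delay(500);
}
```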

The decision to put a 10-bit ADC in the chip was made by someone at Atmel. We don't know why...

It's actually an 8-bit processor, but it has some 16-bit registers, and the C++ language combines the 8-bit bytes so you can program with bigger numbers.
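For example, here's a rough sketch of what that looks like from the programmer's side (the sizes below are for an Uno-class AVR chip):

```cpp
uint8_t  oneByte   = 200;        // fits in a single 8-bit register
uint16_t twoBytes  = 40000;      // the compiler pairs two bytes for you
uint32_t fourBytes = 3000000UL;  // four bytes, still handled automatically

void setup() {
  Serial.begin(9600);
  Serial.println(oneByte);
  Serial.println(twoBytes);
  Serial.println(fourBytes);
}

void loop() {}
```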

Everything inside the microcontroller (or computer) is binary (ones & zeros), but again the programming language takes care of any conversions. The "context" determines if a binary value represents a number, an alpha-numeric character, a processor instruction, a pixel in an image, etc. The software has to "know" the context of every binary value.
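A quick illustration of "context": the same 8-bit pattern printed two different ways:

```cpp
void setup() {
  Serial.begin(9600);
  byte b = 0b01000001;      // the same 8 bits...
  Serial.println(b);        // ...as a number: prints 65
  Serial.println((char)b);  // ...as an ASCII character: prints A
}

void loop() {}
```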

Binary counts like this:
0 binary = 0 decimal
1 binary = 1 decimal
10 binary = 2 decimal
11 binary = 3 decimal
100 binary = 4 decimal
etc...

With 10 bits you can count to
11 1111 1111 (note)
That's 1023 in decimal.
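Or, in code form (a trivial sketch, just to show the arithmetic): the biggest n-bit value is 2 to the nth power, minus 1:

```cpp
void setup() {
  Serial.begin(9600);
  unsigned int maxVal = (1U << 10) - 1;  // 2^10 - 1 = 1024 - 1
  Serial.println(maxVal);                // prints 1023
  Serial.println(maxVal, BIN);           // prints 1111111111
}

void loop() {}
```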

No, it's just 1024 different values or different possible zero & one combinations, including zero.

(note) The spaces don't exist in the computer. I added them for human readability, and because groups of 4 bits can be converted to hexadecimal (base 16).

P.S.
The Windows Calculator in "programmer mode" can convert between binary, decimal, hex, and octal. Hex is "handy" because each group of 4 bits converts exactly to one hex digit. It's easier (for humans) to read & write than binary, and you can learn to convert numbers of any size between hex and binary in your head. Converting between decimal and binary isn't so easy. (Octal isn't used that much.)
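On the Arduino side you can do the same conversions with Serial.print(), which takes an optional base argument, e.g.:

```cpp
void setup() {
  Serial.begin(9600);
  int x = 1023;
  Serial.println(x, DEC);  // 1023
  Serial.println(x, HEX);  // 3FF -- each hex digit is exactly 4 bits
  Serial.println(x, BIN);  // 1111111111
}

void loop() {}
```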

You can do a LOT of programming without knowing anything about binary, but with microcontroller programming one bit often represents the state of a switch, LED, I/O pin, etc. And you have to be aware of the size of integers, longs, etc., so you don't "overflow" a value.
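A made-up example of both ideas, sized for an Uno (on 32-bit boards an unsigned int is bigger, so the wrap-around would happen later):

```cpp
void setup() {
  Serial.begin(9600);

  byte pins = 0b00000100;        // pretend each bit is one switch or pin
  if (bitRead(pins, 2)) {        // test the state of bit 2
    Serial.println("bit 2 is set");
  }

  unsigned int counter = 65535;  // biggest 16-bit unsigned value
  counter = counter + 1;         // "overflows"...
  Serial.println(counter);       // ...and wraps around to 0
}

void loop() {}
```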

Does this make sense?

bits  binary      decimal
  0   0000000000        0
  1   0000000001        1
  2   0000000011        3
  3   0000000111        7
  4   0000001111       15
  5   0000011111       31
  6   0000111111       63
  7   0001111111      127
  8   0011111111      255
  9   0111111111      511
 10   1111111111     1023
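And here's a little sketch that prints the same table (note that Serial.print(value, BIN) drops the leading zeros):

```cpp
void setup() {
  Serial.begin(9600);
  for (int bits = 0; bits <= 10; bits++) {
    unsigned int value = (1U << bits) - 1;  // all of the low 'bits' bits set
    Serial.print(bits);
    Serial.print('\t');
    Serial.print(value, BIN);
    Serial.print('\t');
    Serial.println(value);
  }
}

void loop() {}
```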


You forgot
  0   0000000000        0

Tom... :smiley: :+1: :coffee: :australia:

My word! :grimacing:

No, 1 1/4 bytes... :grinning: :smiley: :grinning: :smiley:

Tom... :smiley: :+1: :coffee: :australia:
PS. I'm off to bed, 2:30am.. I need my "beauty sleep".... :sleeping: :sleeping: :sleeping: :sleeping:

At 12:30 PM? :grin:
I knew that!

Many thanks, very clear.

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.