The analog-to-digital converter on the Uno and other "basic" Arduinos is 10 bits. (It gets read into a 16-bit integer with the 6 most-significant bits unused and set to zero.)
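For example, here's a minimal sketch (the pin choice A0 is just an example) showing the reading landing in a 16-bit int:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(A0);  // 10-bit ADC: always 0 to 1023
  Serial.println(reading);       // fits easily in a 16-bit int
  delay(500);
}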
The decision to put a 10-bit ADC in the chip was made by someone at Atmel. We don't know why...
It's actually an 8-bit processor, but it has some 16-bit registers, and the C++ compiler combines 8-bit bytes behind the scenes so you can program with bigger numbers.
Everything inside the microcontroller (or computer) is binary (ones & zeros), but again the programming language takes care of any conversions. The "context" determines if a binary value represents a number, an alpha-numeric character, a processor instruction, a pixel in an image, etc. The software has to "know" the context of every binary value.
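You can see that "context" idea directly on an Arduino: the same 8-bit value prints differently depending on how you tell the software to interpret it. (The value 65 here is just an arbitrary example.)

void setup() {
  Serial.begin(9600);
  byte b = 65;              // one binary value: 0100 0001
  Serial.println(b);        // treated as a number: prints 65
  Serial.println((char)b);  // treated as a character: prints A
}

void loop() {}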
Binary counts like this:
0 binary = 0 decimal
1 binary = 1 decimal
10 binary = 2 decimal
11 binary = 3 decimal
100 binary = 4 decimal
etc...
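If you want to watch the pattern continue, Serial.print() will show any number in binary for you. A quick sketch (stopping at 15 is arbitrary):

void setup() {
  Serial.begin(9600);
  for (int i = 0; i <= 15; i++) {
    Serial.print(i, BIN);       // the value shown in binary
    Serial.print(" binary = ");
    Serial.print(i);            // the same value in decimal
    Serial.println(" decimal");
  }
}

void loop() {}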
With 10 bits you can count to
11 1111 1111 (note)
That's 1023 in decimal.
(That's not a contradiction with "1024 values": there are 1024 different possible zero & one combinations, but one of them is zero, so the top count is 1023.)
(note) The spaces don't exist in the computer. I added them for human readability, and because groups of 4 bits can be converted to hexadecimal (base 16).
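That's the general rule: n bits give 2^n combinations, and the biggest count is 2^n - 1. You can check it with a couple of bit-shifts:

void setup() {
  Serial.begin(9600);
  unsigned int combinations = 1U << 10;    // 2^10 = 1024 possible values
  unsigned int maxCount = (1U << 10) - 1;  // 2^10 - 1 = 1023, the top reading
  Serial.println(combinations);  // prints 1024
  Serial.println(maxCount);      // prints 1023
}

void loop() {}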
P.S.
The Windows Calculator in "programmer mode" can convert between binary, decimal, hex, and octal. Hex is "handy" because each group of 4 bits converts exactly to one hex digit. It's easier (for humans) to read & write than binary, and you can learn to convert numbers of any size between hex and binary in your head. Converting between decimal and binary isn't so easy. (Octal isn't used that much.)
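The 4-bits-per-hex-digit trick is why: 11 1111 1111 splits into 11 / 1111 / 1111, which is 3 / F / F, so 1023 decimal is 3FF in hex. The compiler accepts all three notations, and they're all the same value:

void setup() {
  Serial.begin(9600);
  unsigned int value = 0x3FF;             // hex literal: 3 F F
  Serial.println(value);                  // prints 1023 (decimal)
  Serial.println(value, HEX);             // prints 3FF
  Serial.println(value, BIN);             // prints 1111111111
  Serial.println(value == 0b1111111111);  // prints 1 (true); binary literal
}

void loop() {}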
You can do a LOT of programming without knowing anything about binary, but with microcontroller programming one bit often represents the state of a switch, LED, or I/O pin, etc. And you have to be aware of the size of integers, longs, etc., so you don't "overflow" a value.
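Here are both of those points in one last sketch: bitRead() pulls one bit out of a value (the bit pattern is made up for the example), and an unsigned 16-bit int wraps around when it overflows:

void setup() {
  Serial.begin(9600);

  // One bit per switch: test bit 2 of a (made-up) port snapshot.
  byte inputs = 0b00100100;
  Serial.println(bitRead(inputs, 2));  // prints 1: bit 2 is set

  // Overflow: on the Uno an unsigned int is 16 bits, so it tops out at 65535.
  unsigned int big = 65535;
  big = big + 1;        // wraps around past the top
  Serial.println(big);  // prints 0, not 65536
}

void loop() {}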