ADC/1024 is correct, and ADC/1023 is wrong!

Let's say we have just a 1-bit ADC over 0-5 V: only two possible values, 0 and 1 (n = 2). Should /(n-1) = /1 be used, so that a reading of 1, produced by anything above 2.5 V, is reported as exactly 5 V?
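Worked out for that 1-bit case, a minimal C sketch (the 0-5 V range and the two readings come from the example above; everything else is just illustration):

    #include <stdio.h>

    int main(void)
    {
        /* 1-bit ADC over 0-5 V: readings 0 or 1, so n = 2 possible values */
        printf("/(n-1): %.2f V\n", 1 * 5.0 / (2 - 1)); /* reading 1 -> 5.00 V, the very top */
        printf("/n:     %.2f V\n", 1 * 5.0 / 2);       /* reading 1 -> 2.50 V, bottom of its 2.5-5 V band */
        return 0;
    }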

And here is the same exercise for n = 5, an ADC with five possible values (0-4) over a 0-5 V input:

First, the *x/(n-1) math (with x = 5, the full-scale voltage):

input voltage (V)   ADC value   ADC*x/(n-1)   result   average error   output
4.00-4.99           4           *5/4          5.00     +0.50           5
3.00-3.99           3           *5/4          3.75     +0.25           3
2.00-2.99           2           *5/4          2.50      0.00           2
1.00-1.99           1           *5/4          1.25     -0.25           1
0.00-0.99           0           *5/4          0.00     -0.50           0

Note that for the 5 possible ADC values, the output scale now has 6 values (0-5), and the value 4 can never occur! And although the average error might look nicely distributed around the centre of the scale, the actual output is much uglier: with integer truncation, a slowly increasing voltage would produce the outputs 0, 1, 2, 3, 5! The maximum error in real-world use is 2 V, because the tiny change from 3.999 V to 4.001 V causes the output to jump from 3 V to 5 V, a change of 2 V!
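Here is a minimal C sketch of that toy n = 5 converter (the 5 V full scale and the integer truncation are taken from the tables; the constant names are my own):

    #include <stdio.h>

    #define N_VALUES  5   /* possible ADC readings: 0-4 */
    #define FULLSCALE 5   /* x: full-scale input in volts */

    int main(void)
    {
        printf("ADC  *x/(n-1)  *x/n\n");
        for (int adc = 0; adc < N_VALUES; adc++) {
            int bad  = adc * FULLSCALE / (N_VALUES - 1); /* *5/4, truncated */
            int good = adc * FULLSCALE / N_VALUES;       /* *5/5, truncated */
            printf("%3d  %8d  %5d\n", adc, bad, good);
        }
        return 0;
    }

Running it prints 0, 1, 2, 3, 5 in the *x/(n-1) column (4 never appears) and 0, 1, 2, 3, 4 in the *x/n column.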

Now the correct scaling math, *x/n:

input voltage (V)   ADC value   ADC*x/n   result   average error   output
4.00-4.99           4           *5/5      4.00     -0.50           4
3.00-3.99           3           *5/5      3.00     -0.50           3
2.00-2.99           2           *5/5      2.00     -0.50           2
1.00-1.99           1           *5/5      1.00     -0.50           1
0.00-0.99           0           *5/5      0.00     -0.50           0

All output values are properly scaled and all are represented. The average error is never greater than 0.5, so no worse than the /(n-1) example, and it is always the same fixed value (-0.5), which makes it very easy to compensate: just add half a step. The maximum error at any time is 1 V, half the maximum error of the /(n-1) example, which can introduce extra error of up to 1 step in high ADC readings.
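The same /n scaling on a real 10-bit converter, as a sketch under some assumptions: a 5 V reference and readings 0-1023 (a typical AVR/Arduino setup), with adc_to_mv being a made-up name. The fixed -0.5 step bias is compensated by adding half a step:

    #include <stdint.h>
    #include <stdio.h>

    /* 10-bit reading (0-1023) to millivolts using the /n (here /1024) rule. */
    static uint16_t adc_to_mv(uint16_t adc)
    {
        /* 32-bit intermediate: 1023 * 5000 would overflow 16 bits */
        uint32_t mv = (uint32_t)adc * 5000UL / 1024UL;
        /* one step is 5000/1024 ~ 4.88 mV; adding half a step centres the
         * result in its input band, cancelling the fixed -0.5 step bias */
        return (uint16_t)(mv + 5000UL / 1024UL / 2UL);
    }

    int main(void)
    {
        printf("%u mV, %u mV\n",
               (unsigned)adc_to_mv(0), (unsigned)adc_to_mv(1023)); /* 2 mV, 4997 mV */
        return 0;
    }

With the half-step added, every reading reports the centre of the input band that produced it, so the error is at most half a step in either direction.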