Dividing by 1023 or 1024? The final verdict on analogRead

Budvar10:
ADC/1024 is correct, and ADC/1023 is wrong!

Let's say we have just a 1-bit ADC, 0 or 5 V. Just two possible values. Should /0 be used?

And here is for n=5:

First, the *x/(n-1) math:

input voltage   ADC value   ADC*x/(n-1)   result   average error   output
4.00-4.99       4           *5/4          5.00     +0.50           5          
3.00-3.99       3           *5/4          3.75     +0.25           3
2.00-2.99       2           *5/4          2.50      0.00           2
1.00-1.99       1           *5/4          1.25     -0.25           1
0.00-0.99       0           *5/4          0.00     -0.50           0
Note that for the 5 possible ADC values, the output scale now has 6 values (0-5)
and the value 4 can never occur! And although the average error might look to be
nicely distributed around the centre of the scale, the actual output proves to be
much uglier. For a slowly increasing voltage, the scaled output would read: 0, 1, 2, 3, 5!
The maximum error in real-world use is 2 V, because the tiny input change from
3.999 V to 4.001 V causes the output to jump from 3 V to 5 V, a change of 2 V!
Correct scaling math x/n:

input voltage   ADC value   ADC*x/n   result   average error   output
4.00-4.99       4           *5/5      4.00     -0.50           4
3.00-3.99       3           *5/5      3.00     -0.50           3
2.00-2.99       2           *5/5      2.00     -0.50           2
1.00-1.99       1           *5/5      1.00     -0.50           1
0.00-0.99       0           *5/5      0.00     -0.50           0
All output values are properly scaled and all are represented. The average error
is never greater than 0.5 (no worse than the /(n-1) example), and the average
error is always one fixed value (-0.5), making it very easy to compensate for.
The maximum error at any time is 1 V; this is half the maximum error of the
/(n-1) example, which can introduce extra error of up to 1 V in high ADC readings.
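
For anyone who wants to reproduce the two tables above, here is a minimal,
self-contained C++ sketch of the comparison (not Arduino code; the hypothetical
5-level ADC, the 5 V full scale and the truncation to whole volts are taken from
the example above):

// Reproduce the two n = 5 scaling tables above.
// Assumptions: hypothetical 5-level ADC (values 0-4), 5 V full scale,
// output truncated to whole volts as in the tables.
#include <cstdio>

int main() {
    const int n = 5;          // number of possible ADC values (0..n-1)
    const double xref = 5.0;  // full-scale voltage

    printf("ADC   x/(n-1)        x/n\n");
    for (int adc = n - 1; adc >= 0; --adc) {
        double byNminus1 = adc * xref / (n - 1);  // the *5/4 scaling
        double byN       = adc * xref / n;        // the *5/5 scaling
        printf("%3d   %5.2f -> %d   %5.2f -> %d\n",
               adc,
               byNminus1, (int)byNminus1,         // truncated output, /(n-1)
               byN,       (int)byN);              // truncated output, /n
    }
    return 0;
}

The /(n-1) column reproduces the 5, 3, 2, 1, 0 outputs from the first table; the
/n column gives 4, 3, 2, 1, 0.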

TolpuddleSartre:
Please, can someone put this thread out of its misery?
Oh, the humanity!
No, because 2^1 is 2, so by GoForSmoke's logic, we would divide by 1.

These are rather poor strawmen.

TS, how do YOU convert a single bit into voltage with any more accuracy than the method you're trying to ridicule?
Oh? What? You can't?

Bud, if you measure only 5 steps, the CONVERSION error is no longer 3 orders of magnitude below instrument error.
The error diminishes at a 1/(n(n-1)) rate, so what looks huge at n = 5 is negligible at n = 1024. I argue on practicalities that do not scale freely. Please at least stay in the ballpark!

I gave clear reasons that you don't address at all and instead erect these pre-broken strawmen.
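
And just to keep the n = 1024 ballpark concrete, here is a minimal sketch of the
same comparison for a real 10-bit reading (plain C++, not an Arduino sketch; the
5.0 V reference and the worst-case reading of 1023 are assumptions for
illustration):

// Compare the two scalings for a 10-bit ADC (n = 1024 levels, readings 0..1023).
// Assumes a 5.0 V reference; the worst-case difference occurs at reading 1023.
#include <cstdio>

int main() {
    const int n = 1024;       // number of ADC levels
    const double vref = 5.0;  // assumed reference voltage
    int reading = 1023;       // worst-case analogRead() value

    double byN       = reading * vref / n;          // the /1024 convention
    double byNminus1 = reading * vref / (n - 1);    // the /1023 convention
    double diff      = reading * vref / ((double)n * (n - 1)); // byNminus1 - byN

    printf("/1024: %.6f V\n", byN);        // 4.995117 V
    printf("/1023: %.6f V\n", byNminus1);  // 5.000000 V
    printf("difference: %.6f V\n", diff);  // 0.004883 V, about one LSB
    return 0;
}

At n = 1024 the two conventions differ by at most about one LSB (~4.9 mV), which
is the scale the ballpark argument above is about.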