Once you have sampled your analog value, there is no going back to the original. All you can do is pick a guess that would have been quantized to that sampled value. So dividing by 1023 and dividing by 1024 both lead to possible original values.
Your link is broken. I can't find my boilerplate explanation; someone else may want to tackle it.
The (usual) goal of dividing by the ADC precision is to normalize the value to the range 0.0-1.0, essentially making the value "virtual" and independent of the precision. This normalized value is then often multiplied by another scaling value that represents some physical range (like 0-5 V).
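A minimal sketch of that two-step scaling, assuming a 10-bit ADC and a 5 V reference (both assumptions, not from the thread):

```python
# Normalize a raw ADC reading to 0.0-1.0, then scale to a physical range.
# BITS and V_REF are assumptions: a 10-bit ADC with a 5 V reference.

BITS = 10
V_REF = 5.0

def normalize(raw: int) -> float:
    """Map a raw ADC code to the lower bound of its bin in [0.0, 1.0)."""
    return raw / 2**BITS

def to_volts(raw: int) -> float:
    """Scale the normalized value to the physical input range."""
    return normalize(raw) * V_REF

print(to_volts(0))     # 0.0
print(to_volts(512))   # 2.5 (the midpoint code maps to exactly half scale)
print(to_volts(1023))  # 4.995..., never quite 5.0, by design
```

Note that the top code never reads exactly 5.0 here; as discussed below, that is a feature of the representation, not an error to be "corrected".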
The exact literal value produced by the ADC always marks the low end of a range of actual input values. For example, a 2-bit ADC will produce a "0" for any input between 0 and 1/4 of full scale, and a "3" for any input between 0.75 and 1.0 of full scale.

When interpreting the ADC result, it is important to remember what it actually represents: a range. So if we read "3", we shouldn't complain that it doesn't represent 1.0 just because we "know" that the input is 1.0. That is what happens when people scale for voltage and then cry when the result seems unable to reach the full scale value (e.g. 5.000 V).

There are four ranges in the input of a 2-bit ADC. Dividing the reading by that same number, 4, produces a valid index to the lower bound of the range that was registered. If we understand that a reading of 3/4 actually refers to the range 3/4 to 1, everything is good. However, if we fall into the temptation of "correcting the error" so that the top reading produces the full scale value, we have broken the system: dividing by 3 does make 3/3 produce a 1, but a 1 is no longer an honest representation of the actual range (0.75 to 1) that the input signal was contained in.
The four ranges of a 2-bit ADC correspond to four normalized sub-ranges within 0-1 if, and only if, the result is divided by the exact number of ranges, which is 2^(bits of ADC precision). The same principle applies to an ADC of any precision.
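The 2-bit example above can be checked directly. This sketch quantizes inputs into four bins and shows that dividing the code by 4 recovers the exact lower bound of the bin, while dividing by 3 makes the top code read 1.0 at the cost of misreporting the range it stands for:

```python
# Each ADC code stands for a bin of inputs; dividing by 2**bits (here 4)
# indexes the lower bound of that bin exactly.

BITS = 2
N = 2**BITS  # 4 bins

def code_for(x: float) -> int:
    """Quantize an input in [0.0, 1.0] to a 2-bit code."""
    return min(int(x * N), N - 1)

# Every code divided by N lands back on the lower bound of its own bin.
for code in range(N):
    lower_bound = code / N
    assert code_for(lower_bound) == code

# Dividing by N-1 (i.e. 3) instead makes the top code read exactly 1.0,
# but 1.0 is not the lower bound of any bin the input could have been in.
print(3 / 4)  # 0.75 -> honest lower bound of the top bin
print(3 / 3)  # 1.0  -> "full scale", but no longer indexes a real bin
```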
It is true that the truncation error from quantization tempts us to guess at the original value. But the "best guess" is one that uses the same scale factor for normalization as was used for measurement.
Mine is that it does not matter. Both are OK, and so are zillions of other options. As long as your guess falls within the interval that led to the sampled value, it is as good as any other. (You could look at minimizing the expected error, in which case it's best to shoot for the center of the interval - but given the precision of the ADCs and power supplies usually used... there is no point.)
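The "center of the interval" option amounts to adding half an LSB before scaling. A small sketch, again assuming a 10-bit ADC and a 5 V reference:

```python
# Reconstruct either the lower bound or the midpoint of a quantization bin.
# Adding 0.5 LSB before scaling targets the bin center, which minimizes
# the expected reconstruction error. BITS and V_REF are assumptions.

BITS = 10
V_REF = 5.0

def lower_bound_volts(raw: int) -> float:
    """Plain normalization: lower bound of the bin."""
    return raw / 2**BITS * V_REF

def midpoint_volts(raw: int) -> float:
    """Half-LSB offset: center of the bin."""
    return (raw + 0.5) / 2**BITS * V_REF

print(lower_bound_volts(512))  # 2.5
print(midpoint_volts(512))     # 2.50244140625 (half an LSB higher)
```

Either reconstruction is a legitimate guess; they differ by half an LSB, which is usually below the noise floor anyway.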
It reminds me of the "Can an airplane on a conveyor belt running in the opposite direction from the runway in front of the plane, take off?" controversy. Two camps, reams of confusing arguments from both sides. People sticking to their view no matter what. People saying, "it's obvious". Occasional conversions.
Never heard about the plane, but it seems like a relative-velocity question to me.
There might be some aerodynamic laws at play that I'm not aware of, though.
The ADC case feels simpler. Once you have sampled, you have lost information; there is no going back, since a whole range of different analog values can lead to the same sample. So it's OK to have a range of formulas leading to a likely original value - ideally accompanied by an error range.