As an engineer I would respond that if 1/1024 (approx. 0.005 V of the full range) of a (probably fluctuating) voltage makes a difference, then you should be using more than 10 bits for the analog conversion. The rest is academic.
Whatever makes you feel good. I mean that seriously.
What's 5 mV among friends? So we miss Uranus and fly by Pluto.
Probably the transducer can never really produce an accurate 0-5V and then the sampling resistor is 5% and the whole thing isn't linear anyway.
Oh, and then there is 50/60 Hz hum superimposed on the signal.
"Or tell me that I'm off my rocker"
Don't fall off your rocker, that hurts too much.
1024 (the number of possible output values) is correct and 1023 (that number minus 1) is not.
To see why, instead of a 10-bit ADC, consider a 1-bit ADC (2 possible output values). An input voltage falls either in the range 0-2.5 V or in the range 2.5-5 V, the interval width being 5 V / 2.
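To make that concrete, here is a minimal plain-C++ sketch (assuming an ideal converter and a 5 V reference, both assumptions on my part) that prints the voltage interval each output code stands for; notice the divisor is always 2^N:

```cpp
#include <cstdio>

int main() {
    const int bits = 1;                 // try 1, 2, or 10
    const double vref = 5.0;            // assumed reference voltage
    const int levels = 1 << bits;       // 2^N possible codes
    const double step = vref / levels;  // width of each interval

    // Each code k stands for the interval [k*step, (k+1)*step)
    for (int k = 0; k < levels; ++k)
        printf("code %d -> %.3f V to %.3f V\n", k, k * step, (k + 1) * step);
}
```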
For me, it's about applying a correct understanding that will almost always work because it is correct for almost all cases. It's just easier that way in the long run. 1024 is the right answer, it might not matter here but why get used to using a wrong answer and get incorrect results later down the road?
A 1-bit ADC makes sense the way I explained it as well.
Not to me. Sorry.
Where do you get the 2.5? You have to divide 5 by the number of intervals, which is 2.
Funny, I was thinking about the same thing last week. My gut tells me 1024; the analysis tells me 1023.
If you divide by 1024 and you have a 5V input signal and a 5V reference voltage, you will never get a 5V reading as the max reading is 1023.
(1023/1024) * 5V = 4.995V
Still confused, though.
Now, how did Delta_G get in that data sheet excerpt? (reply #11)
If the ADC were only two bits instead of 10, it would divide the total voltage into four equal parts. But if we follow the logic of dividing by 1023 for a 10-bit ADC then for a 2-bit ADC we would divide by 3.
| count | /4  | /3  |
|-------|-----|-----|
| 0     | 0   | 0   |
| 1     | .25 | .33 |
| 2     | .5  | .66 |
| 3     | .75 | 1.0 |
Dividing by 3 means that we get zero at the low end of the scale and 1 at the top end, but the voltage difference between successive ADC counts is 1/3, which isn't correct.
If you want the highest count to give you full-scale, just change the scale to this:
| count | (count+1)/4 |
|-------|-------------|
| 0     | .25         |
| 1     | .5          |
| 2     | .75         |
| 3     | 1.0         |
For a 10-bit ADC this means that you would use V = (ADC + 1)/1024. But to get the correct difference between successive counts you can't have zero at one end and 1 at the other. You have to pick one or the other.
So, no, 1023 is definitely wrong. But, as has been pointed out, for a 5V 10-bit ADC the error is very small.
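A quick numeric check makes the step-size problem visible. This is plain C++ with an assumed ideal 5 V reference; it prints the three scalings from the tables above for a 2-bit converter:

```cpp
#include <cstdio>

int main() {
    const double vref = 5.0;  // assumed reference voltage
    // Divisor 4 (= 2^N) keeps a constant step of vref/4.
    // Divisor 3 (= 2^N - 1) pins 0 at the bottom and vref at the top,
    // but stretches the step between successive codes to vref/3.
    for (int code = 0; code <= 3; ++code)
        printf("code %d:  /4 -> %.2f V   /3 -> %.2f V   (code+1)/4 -> %.2f V\n",
               code, code * vref / 4, code * vref / 3, (code + 1) * vref / 4);
}
```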
Pete
P.S. Using a one-bit ADC as an example doesn't make sense. The correct divisor for an N-bit ADC is 2^N; the "10-bit divisor is 1023" rule uses (2^N)-1, which for a one-bit ADC would require us to use zero as the divisor [+edit: WRONG! Man, did I blow that one! See messages #17 and #24 below].
Pete
I don't know if this was partly inspired by my (incorrect) comment in another thread, but I really like your logic, Delta_G.
I was quite perplexed by the fact that the manufacturer themselves say 1024 but if you look at it as a range instead of a value it makes more sense.
(1023/1024) * 5V = 4.995V
Still confused, though.
Correct. I like to think about this result as follows: the actual voltage corresponding to "1023" is somewhere between 4.995 V and 5.0 V.
Similarly, for a 1-bit ADC, the voltage corresponding to "1" would be between 2.5 V and 5 V, and "0" would be between 0.0 V and 2.5 V. The 2.5 V comes from 5.0 V divided by the number of intervals, not the number of intervals minus 1.
Which, incidentally, is how a digital port pin, a 1-bit ADC, works.
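As a rough illustration only (Arduino-style, and approximate because a real pin's logic thresholds are not exactly Vcc/2), you could fake that 1-bit ADC from analogRead(); oneBitRead() is a hypothetical helper, not a library call:

```cpp
// Hypothetical helper: collapse the 10-bit reading into one bit.
int oneBitRead(int pin) {
  // Codes 0-511 fall in the lower half-interval, 512-1023 in the upper one.
  return analogRead(pin) >= 512 ? 1 : 0;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(oneBitRead(A0));  // behaves like an idealized digital read
  delay(500);
}
```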
Now I have to go back to the data sheet and remove:
"Note, Delta_G can use 1023."
Geeeee.
Delta_G:
So if we have a reading of 1 then we have at least 2.5 volts but not more than 5.
Half correct. If we have a reading of 1 then we have at least 2.5 volts. Anything past that is unresolvable by a successive approximation converter.
Which also explains why the correct divisor is 1024. There is no way to tell the difference between "almost maximum" and "maximum"; the last little bit is unreachable. With a divisor of 1024 your expression never returns 5.0, which is exactly as it should be.
In other words, reading exactly the reference voltage and using a successive approximation converter are mutually exclusive.
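A two-line check of that claim (plain C++, assuming the usual reading * Vref / divisor expression and a 5 V reference):

```cpp
#include <cstdio>

int main() {
    const double vref = 5.0;
    const int top = 1023;  // highest possible 10-bit reading

    // /1024: the top code converts to just under Vref, matching the fact
    // that the converter cannot distinguish "almost maximum" from "maximum".
    printf("1023 * 5/1024 = %.4f V\n", top * vref / 1024);
    // /1023: the top code converts to exactly Vref, claiming a distinction
    // the conversion never actually made.
    printf("1023 * 5/1023 = %.4f V\n", top * vref / 1023);
}
```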
Um, (2 ^ 1) -1 == 1
Indeed! Just as I was going to sleep I realized that mistake, but it was too late to get up to fix it.
Pete
This looks like a good reference...
https://www.maximintegrated.com/en/app-notes/index.mvp/id/1080
Navigate to Figure 3. The DAC voltage associated with the most significant bit (MSB) is ½Vref. The next bit is ½ of the voltage associated with the MSB, or ½(½Vref). That continues for all the bits of our converter's DAC.
Imagine we have just two bits. That means the maximum voltage the DAC can output is ½Vref + ½(½Vref) or ¾Vref.
So, when comparing Vin to the DAC output voltage the last comparison is against ¾Vref. The maximum output of our converter (0b11) means "Vin is greater than ¾Vref". There is physically no way to make any comparisons beyond that.
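Here is that walkthrough as a tiny trace (plain C++, ideal parts assumed), making the two comparisons a 2-bit successive approximation converter performs:

```cpp
#include <cstdio>

int main() {
    const double vref = 5.0, vin = 4.2;  // example input near full scale
    int code = 0;

    // Bit 1 (MSB): compare against 1/2 Vref.
    if (vin >= vref / 2) code |= 0b10;
    // Bit 0: compare against the running DAC value plus 1/4 Vref.
    double dac = (code >> 1) * (vref / 2) + vref / 4;
    if (vin >= dac) code |= 0b01;

    // 0b11 only means "Vin is above 3/4 Vref"; no further comparison exists.
    printf("Vin = %.2f V -> code 0b%d%d\n", vin, (code >> 1) & 1, code & 1);
}
```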
Make sense?
You are welcome.
I miss that cartoon.
And, just to beat that dead horse a bit more... the correct divisor is neither 1023 nor 1024. The correct way to use such converters is with calibration.
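In that spirit, here is what a two-point (offset and gain) calibration usually looks like in plain C++. Every number below is a made-up placeholder, not a measurement:

```cpp
#include <cstdio>

// Two-point calibration: map raw codes to volts using two known inputs
// measured with a trusted meter. The values below are hypothetical.
const int    rawLo = 3,     rawHi = 1019;   // codes read at the two test points
const double vLo   = 0.010, vHi   = 4.975;  // voltages applied at those points

double calibrated(int raw) {
    // Linear interpolation between the two measured endpoints
    return vLo + (raw - rawLo) * (vHi - vLo) / (rawHi - rawLo);
}

int main() {
    printf("raw 512 -> %.4f V\n", calibrated(512));
}
```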

