# How to Increase the Resolution of analogRead()! (from 10-bits up to 21-bits)

Hello,

I just wrote a library to get up to 21-bits of precision (user-specified from 10 to 21 bits) when reading in analog voltages with the Arduino’s built-in ADC (Analog to Digital Converter). This is VERY useful when you need extra precision reading in analog sensors, so I thought I’d post a link to my library here.

Realistically, given the limitations of oversampling and the fact that a single 21-bit reading takes ~500 seconds (8.33 minutes; refer to the table at my link below), I'd stick to 16-bit resolution or less (a single 16-bit reading takes only ~0.5 sec). However, I specify "up to 21-bit precision" simply because my library accepts a precision value of up to 21 bits without breaking. I'm not saying that the ADC will really give a reliable 21-bit result; I'm saying it won't break the code. Having said that, I do think the 16-bit precision results returned from my library are valid: I have compared them to other commanded precisions while watching them read a potentiometer. That is what the example sketches that come with the library do, and I recommend you run them to see for yourself.

Additionally, my library permits data smoothing by sampling many times (a user-specified amount), then returning the average of all those samples, at a user-specified sampling precision.

Here it is: http://electricrcaircraftguy.blogspot.com/2014/05/using-arduino-unos-built-in-16-bit-adc.html

Happy coding! I hope you find this useful.
I love Arduino!

Sincerely,
Gabriel

Reference:

Your theory is completely misconceived.

The A/D converter on the Arduino chip resolves 0-5 volts into 1024 steps (codes 0 to 1023). In round numbers, 5 mV per step.

If you feed 5 mV to the chip, you will get the A/D answer 1. If you feed 10 mV to the chip, you will get the A/D answer 2, and so on.

If you feed 7.5 mV to the chip, you will MAYBE get sometimes 1, and sometimes 2. And if you do this a bunch of times, and get 1 and 2 a bunch of times, then you can do your averaging and get an average outcome of 1.5, and then maybe you can conclude that the voltage is somewhere between 5 mV and 10 mV, so let's call it 7.5 mV.

So, maybe, you can get 1 extra bit of implied precision.

But here is where your theory falls down in a twisted wreck.

If your input signal is 6.25 millivolts, your theory would depend on getting the a/d result 1, 75% of the time, and the a/d result 2, 25% of the time. You could then, supposedly, calculate an "average" a/d outcome of 1.25, and then interpolate the result of 6.25 mV in between the 5 mV and 10 mV.

The problem is, the a/d conversion does not work that way. The response of the device to the input voltage, looks like a staircase. For some input voltage range, between 3 and 7 mV, you will get the answer 1. For some input voltage range between 8 and 12 mV, you will get the answer 2. Between 13 and 17 mV, you will get 3. And so on.

These are the flat parts of the staircase. In some small region, which might be between 7 and 8 mV, or might not be, you will get an uncertain answer, either 1 or 2. That's where your concept might work, for about 1 extra bit. But that is the only place it will work.

For any constant input in the flat step regions, you will always get the same result. If you are not close to the vertical part of the step (and you don't know exactly where that is), you won't get different measurement outcomes. If you take a bunch of measurements with a 5.1 mV input, you'll always get the outcome 1. If you repeat this measurement with 5.2 mV, you'll also get the outcome 1. And at 5.3 mV. And at 6.3 mV. And anywhere else, up to the transition region of unknown location and extent, where the device outcome will jump to 2.
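The staircase argument above can be modeled in a few lines of plain C++. This is an illustrative ideal-quantizer model, not the real AVR transfer function: a noiseless constant input sitting on the flat part of a step returns the same code every time, so averaging repeated conversions reveals nothing new.

```cpp
#include <cstdint>
#include <cmath>

// Ideal-staircase model of a 10-bit, 5 V ADC: every voltage on the same
// "step" returns the same code. (Model for illustration only.)
uint16_t quantize(double volts)
{
    double lsb = 5.0 / 1024.0;               // ~4.88 mV per step
    int code = (int)std::floor(volts / lsb); // flat staircase response
    if (code < 0) code = 0;
    if (code > 1023) code = 1023;
    return (uint16_t)code;
}

// Average of n repeated conversions of the same noiseless constant input.
double noiselessAverage(double volts, int n)
{
    double sum = 0;
    for (int i = 0; i < n; i++)
        sum += quantize(volts); // identical every time: no extra information
    return sum / n;
}
```

For example, 5.1 mV, 5.2 mV, and 6.3 mV all quantize to code 1, and the average of any number of noiseless conversions stays exactly 1, which is the point being made here: without dither noise straddling a step edge, averaging cannot interpolate.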

You cannot distinguish a/d readings up to 21 bits of precision, as you ludicrously claim.

Search for this paper: AVR121: Enhancing ADC resolution by oversampling.

That shows the basis of what you are doing and shows you could expect up to 12 bits.
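For reference, the core recipe in AVR121 is compact: to gain n extra bits of resolution, collect 4^n samples, sum them, and right-shift the sum by n. A minimal sketch of that decimation step in plain C++ (the sample buffer stands in for successive analogRead() calls in a real sketch):

```cpp
#include <cstdint>

// AVR121 oversample-and-decimate: sum 4^n raw 10-bit samples and
// right-shift the sum by n to get a (10 + n)-bit result.
uint32_t oversampleDecimate(const uint16_t* raw_samples, uint8_t extra_bits)
{
    uint32_t num_samples = 1UL << (2 * extra_bits); // 4^n samples
    uint32_t sum = 0;
    for (uint32_t i = 0; i < num_samples; i++)
        sum += raw_samples[i];
    return sum >> extra_bits; // decimation step
}
```

With n = 2, for instance, 16 samples are summed and shifted right by 2, yielding a 12-bit result; a steady code of 512 at 10 bits decimates to 2048 at 12 bits.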

But to expect 21 bits is just silly. What testing have you done to confirm this? Is there monotonicity in the readings? Sure you might have numbers but how do you know they mean anything. Have you tested the input voltage as measured on something else with the readings you get?

The technique relies on there being noise on the input; the amount of noise and its spectral distribution govern what extra resolution you can get from oversampling. Normally two extra bits is the most, but under certain circumstances you could push it to three extra bits.

If there is noise, you will only expect to have maybe 6 or 7 bits of meaningful data in the sampled data. What "oversampling" will do, is give you maybe 2 bits of better resolution, which brings you back to about 9 or 10 bits, which is what you would have got in the first place if the actual signal wasn't corrupted by noise.

michinyon: If there is noise, you will only expect to have maybe 6 or 7 bits of meaningful data in the sampled data. What "oversampling" will do, is give you maybe 2 bits of better resolution, which brings you back to about 9 or 10 bits, which is what you would have got in the first place if the actual signal wasn't corrupted by noise.

Michinyon: What you are saying completely contradicts AVR121. I believe AVR121. They demonstrate, via plots, and signal reconstruction, resolution up to 16-bits on their 10-bit ADC. According to the paper, in order for oversampling to provide additional precision, "some noise has to be present in the signal, at least 1 LSB" (stated on pg. 13, and throughout the paper). Clearly you have not read the paper. Please take some time to do so.

To all: my library is meant to be an easy-to-use implementation of oversampling, according to AVR121. It is not meant to be a magic box that corrects all data sampling problems or provides magical resolution in situations that don't comply with AVR121. However, computationally and mathematically speaking, the algorithm in my library is capable of implementing up to 21-bit precision. According to AVR121, so long as the noise is sufficient, the signal is sufficiently stable, and the other criteria described in AVR121 are met, you can theoretically oversample up to any level of precision. My limitation to 21 bits is a mathematical limitation imposed by the unsigned long datatype in the microcontroller, which will overflow when summing the 10-bit readings if a result beyond 21-bit-equivalent precision is attempted with my library.
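That unsigned-long overflow cap can be checked with a little arithmetic (a sketch: 1023 is the maximum 10-bit reading, and the AVR's unsigned long is 32 bits):

```cpp
#include <cstdint>

// Gaining n extra bits means summing 4^n ten-bit readings of at most 1023
// each; this returns the worst-case value of that sum.
uint64_t worstCaseSum(uint8_t extra_bits)
{
    uint64_t num_samples = 1ULL << (2 * extra_bits); // 4^n
    return num_samples * 1023ULL;
}
```

With 11 extra bits (a 21-bit result), the worst-case sum is 4^11 x 1023 = 4,290,772,992, which just fits under UINT32_MAX (4,294,967,295); at 12 extra bits it overflows, hence the 21-bit ceiling.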

Can it really provide 21-bit precision? Yes. Is that really meaningful? If noise is sufficiently high, the signal is sufficiently stable, etc: YES. It really will be 21-bit precision. However, do not confuse precision with accuracy. I certainly am not claiming 21-bit accuracy. That would be ridiculous. Nevertheless, use your judgement.

Stop bickering and making accusations; if you'd like to provide some meaningful content and contributions to this post, please download my library, do some testing yourself, and post some real, hard data with plots and explanations. You could compare plots of 10-, 12-, 16-, and 21-bit sampled data. Also, be sure to study my table so you understand the sampling-frequency limitations of these techniques at the various resolutions I am stating. I would appreciate data rather than conjectures. It took me long enough to implement the library, so I haven't had time to come up with data. Feel free to contribute.

I think I may end my discussion here, as it is not a productive discussion to babble with babblers who refuse to do the background reading and apply what is freely given.

PS. I have updated post 1 above.

Grumpy_Mike: A) What testing have you done to confirm this? B)Is there monotonicity in the readings? C) Sure you might have numbers but how do you know they mean anything. Have you tested the input voltage as measured on something else with the readings you get?

Hello Grumpy_Mike (nice name :))

To answer your questions (which I have labeled A through C above): A) What testing have I done? I read a pot's wiper during testing, to ensure consistent readings and to ensure my algorithm was implemented correctly. B) I read up a bit on wikipedia here on monotonicity, and I believe there is not monotonicity in my readings, since they changed as expected when moving the pot. However, I'm not sure this fully answers your question. C) You mean have I verified accuracy? I did a quick multimeter-type check, compared the result to a standard 10-bit ADC reading, and it all checks out. Run my example code.

Ultimately, please just look at my detailed examples in the library, run them yourself, and try it out. I have good examples in the library.

As for my application (one of the main reasons I was excited about oversampling when I discovered it recently): I want greater precision for reading very low voltage drops across shunt resistors, to be used as current sensors. I fully intend to use my library to read the voltage drop across a 0.001 Ohm and a 0.0005 Ohm resistor, for current measurements. However, I am keeping the voltage drop very low to minimize heating and power loss, so this will require either an op-amp to boost the signal, or a high-resolution ADC (this is where my library comes in) to take the readings. I plan on trying my library at the 10, 12, 14, and 16-bit resolution settings for this application, but I won't be able to get to it for quite some time, with all I am working on.

PS. If someone would like to compare the results of oversampling, via my library, to those of a real higher-precision ADC, please do! This would be the type of “hard data” we’d all really like to see, including myself. Adafruit has some nice ADCs you can very easily implement for just this purpose. Here they are:
1) https://www.adafruit.com/products/1083 - ADS1015 12-Bit ADC - 4 Channel with Programmable Gain Amplifier, $10
2) https://www.adafruit.com/products/1085 - ADS1115 16-Bit ADC - 4 Channel with Programmable Gain Amplifier, $15

I’d very much like to see the first one compared to my library at 12-bits, and the 2nd one compared to my library at 16-bits. The calls to my function would be done as follows:

```cpp
//include the library (see the download link in post #1)

//instantiate an object of this library class; call it "adc"

//Global constants
const uint8_t pin = A0; //analogRead pin
//constants required to determine the voltage at the pin;
//the max possible reading at each precision is 1023 * 2^(bits - 10)
const float MAX_READING_10_bit = 1023.0;
const float MAX_READING_11_bit = 2046.0;
const float MAX_READING_12_bit = 4092.0;
const float MAX_READING_13_bit = 8184.0;
const float MAX_READING_14_bit = 16368.0;
const float MAX_READING_15_bit = 32736.0;
const float MAX_READING_16_bit = 65472.0;
const float MAX_READING_17_bit = 130944.0;
const float MAX_READING_18_bit = 261888.0;
const float MAX_READING_19_bit = 523776.0;
const float MAX_READING_20_bit = 1047552.0;
const float MAX_READING_21_bit = 2095104.0;

void setup()
{
  Serial.begin(115200);
}

void loop()
{
  unsigned int num_samples = 1;

  //12-bit reading, 1 sample
  uint8_t bits_of_precision = 12;
  float analog_reading1 = adc.analogReadXXbit(pin, bits_of_precision, num_samples);
  float V1 = analog_reading1/MAX_READING_12_bit*5.0; //assumes a 5V analog reference
  Serial.print("12-bit reading: "); Serial.print(analog_reading1); Serial.print(", V: "); Serial.println(V1, 5);

  //16-bit reading, 1 sample
  bits_of_precision = 16;
  float analog_reading2 = adc.analogReadXXbit(pin, bits_of_precision, num_samples);
  float V2 = analog_reading2/MAX_READING_16_bit*5.0;
  Serial.print("16-bit reading: "); Serial.print(analog_reading2); Serial.print(", V: "); Serial.println(V2, 5);
}
```

It took me long enough to implement the library, so I haven't had time to come up with data.

But you are comfortable releasing it to the public, with claims that it implements up to 21 bits of precision?

As the application note AVR121 discusses, under some circumstances this procedure does not work at all.

There must be a certain amount of truly random noise for it to work, and the application note suggests one or more approaches to add such noise to the signal. The tests performed and described in the note are nowhere near exhaustive, and none of them claims to extend the precision beyond 16 bits. The final line of the note recommends correcting errors according to application note AVR120: http://www.atmel.com/images/doc2559.pdf

jremington:

It took me long enough to implement the library, so I haven't had time to come up with data.

But you are comfortable releasing it to the public, with claims that it implements up to 21 bits of precision?

As the application note AVR121 discusses, under some circumstances this procedure does not work at all.

There must be a certain amount of truly random noise for it to work, and the application note suggests one or more approaches to add such noise to the signal. The tests performed and described in the note are nowhere near exhaustive, and none of them claims to extend the precision beyond 16 bits. The final line of the note recommends correcting errors according to application note AVR120: http://www.atmel.com/images/doc2559.pdf

Absolutely I'm ready to release it! It's free. [update 21 Aug. 2015: I may require a fee for download now; according to the Free Software Foundation, however, even if I charge $1 billion for it, it still meets their definition of "free" (http://www.gnu.org/philosophy/selling.en.html)]

The call is `analog_reading = adc.analogReadXXbit(pin, bits_of_precision, num_samples);`. If you make `bits_of_precision` greater than 21, the library is guaranteed not to work. End of story.

Sorry panther3001, I didn’t read through your links and information thoroughly, but I’ll have to say that I’m siding with the engineer that wrote this: Precision, Accuracy, and Resolution

Of note is this: "Precision is the fineness to which an instrument can be read repeatably and reliably."
So without testing, I would say that your library could never achieve repeatable precision greater than the resolution of the device under test (the ADC).

Sadly, a very large fraction of the human population believes that if a computer produces a number, then that number must be correct.

Hello panther3001

I think I've got a sense of the statistical process in the Atmel paper. Having just built a device with added outboard ADCs, I'll be a bit miffed if it turns out I could have done it in software :)

There are also limitations set out in that paper and your posts: the need for random noise, the extended sample period, the assumption that the signal under test does not vary during the sample period.

So I guess the proof of whether this is a practical technique will come through testing.

Personally, I'm willing to invest in an ADC and time to do some testing. I think it's important to define the test method and expected results in advance.

Maybe you could suggest what the testing should involve, and others can comment. I'll probably start with the MCP4725, since I've used it recently.

All the best

Ray

And just to clarify ... This technique is designed to improve the resolution, i.e. the ability to distinguish smaller changes in the signal under test? Not to improve the accuracy of the readings in absolute terms?

Oversampling of analog signals is popular in DSP / FPGA industry to improve the resolution of ADC conversion. In this case, oversampling achieves higher resolution than what would otherwise be possible. This here is a case study.

However, on the digital side of the ADC, there would be a designed resolution of (for example) 12-bit for the Arduino Due. Here, the SAM3X ADC has an ENOB that varies from 9.5 to 11.5 bits. I would think that the OP's library could tighten this range and push it closer to 12-bit (at the expense of increased measurement time).

Expensive and fast ADC chips have built-in oversampling features that can improve its effective number of bits at the cost of measurement speed. However, they never achieve better than their designed bit resolution.

I can see how this library could be useful in improving ENOB and providing more stable measurements.

B) I read up a bit on wikipedia here on monotonicity, and I believe there is not monotonicity in my readings, since they changed as expected when moving the pot. However, I’m not sure this fully answers your question.

No, monotonicity is a good thing; if you do not have it, that is bad.
Basically it means that a higher voltage will also produce a higher reading. If you do not have monotonicity, then you can increase the voltage and the reading will drop.
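One way to turn the monotonicity question into a concrete test (a method sketch; `codes` would come from a real sweep, e.g. slowly turning the pot in one direction only while recording readings):

```cpp
#include <cstddef>
#include <cstdint>

// Check a converter sweep for monotonicity: with a steadily rising input,
// the recorded codes must never decrease.
bool isMonotonic(const uint16_t* codes, size_t n)
{
    for (size_t i = 1; i < n; i++)
        if (codes[i] < codes[i - 1])
            return false; // a reading fell while the input rose
    return true;
}
```

Repeated codes are allowed (the input may sit on one step for several samples); only a drop counts as a monotonicity failure.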

In the AVR121 application note, among other conditions, there is a condition that needs to be true (for resolution enhancement) that hasn't been discussed. The note describes the sampling frequency with respect to the Nyquist frequency, which is fine, but I guess the assumption is that the signal to be measured comes from a different clock domain than the sampling clock, and that the sampling frequency is not at or near any harmonic of the analog signal's frequency. If this is true, then the samples will "ride" along the input signal at unique points. So for this, I agree with their resolution enhancement results.

However, what about when the analog signal (from a DAC or otherwise) is created from the same master clock as the one used for the ADC, so that they are in the same clock domain? What if the frequencies are the same, or the sampling frequency sits at a harmonic of the input frequency? In these cases, the samples will not "ride" along the input signal; they will hit it at the same definite points in time (in sync). So for this, I think repeatable and reliable resolution enhancement is not possible.
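The synchronous-sampling point can be illustrated with a simple phase calculation (a model only, no ADC involved): when the sample rate is an exact integer multiple of the signal frequency, every cycle is hit at the same handful of phases, so the sample set repeats and averaging gains nothing new.

```cpp
#include <cmath>

// Phase (in cycles, range [0, 1)) at which sample k lands on a periodic
// signal of frequency f_signal, sampled at f_sample.
double samplePhase(double f_signal, double f_sample, int k)
{
    return std::fmod(f_signal * k / f_sample, 1.0);
}
```

With f_sample = 4 x f_signal, samples k and k+4 land at the identical phase forever; with an incommensurate rate (say 4100 Hz sampling of a 1 kHz signal), successive cycles are hit at different phases, which is what lets the samples "ride" along the waveform.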

Please don’t respond or make any comments until you have informed yourself about this concept by reading this paper, AND visited my first link above, having read that article on my website, AND downloaded & tested the library.

OK I have done all those.

I applied 1.461V to A0, through a 10 turn 1K wire wound helical pot wired between ground and the 3V3 supply.
The voltage was measured on my DVM on the 2V range. I edited the “full demo” software so it only took 16 bit samples. Then I left it running for about twenty minutes and transferred the resulting output to a CSV file and imported it into Excel to analyse.
The results were:-
328 Sample Count
1.447130469 Mean
1.44718351 Max
1.44703078 Min
0.00015273 Span
152.73 Span uV
76.29395 uV per sample for 16 bits

2.001862533 Span in samples
So the 16-bit sampling returns values no better than 15-bit sampling.

Attached is the Excel file if you want to see it.

Oversampling 16.xlsx (50.3 KB)

Furthermore, when I run the Ultra basic demo, apply 1V to the input, and change the resolution, I get the following voltage readings:

14 bits - 0.2468V
15 bits - 0.493V
16 bits - 0.98665V
17 bits and above - 0.0V

All with the same input voltage. The only thing I changed was the number in this line:

```cpp
int bits_of_precision = 16; //must be a value between 10 and 21
```

Also, with the full demo at 16 bits, I get zero out for an input voltage of 0.011V and anything below.

I have done a bit of analysis on the readings in Grumpy_Mike’s spreadsheet.

The 328 ADC readings cover 21 discrete values, ranging from 1.447030780 to 1.447183510. The gap between each value averages 7.630 uV. This is 10% of the “uV per sample for 16 bits” value in the spreadsheet.

The frequency distribution of the readings is shown in the attached graph. The mode is 1.447152990.

1.447030780 to 1.447183510

Given the single-precision floating point operations that are available with standard C/C++ on the Arduino, only 6-7 digits of a number representation are meaningful. So that range may be better represented as 1.447031 to 1.447184.

It would be of more interest to analyze the distribution of the integer results of the decimation, at various "precision" settings.