Faster Analog Read?

I need to read an analog signal at about 50 kHz (for maybe 50-500 samples, not sure yet). The reference seems to indicate that analogRead() takes about 100 us, which is only good enough for 10 kHz. Is this limitation set by the Arduino libraries, or by the ATMEGA chip?

Any help much appreciated,
Chris

P.S. does anyone know how fast the digital read is?

The analog read speed is a limitation of the ATMEGA chip. There are standalone ADC chips which are a lot faster, but I have no experience with them.

A digital read takes 1 instruction cycle. I think the ATMEGA chip has a 1 to 1 mapping from clock speed to instruction cycle, so a digital read would take 1/16th of a microsecond.

According to the specs:

By default, the successive approximation circuitry requires an input clock frequency [ADC clock] between 50 kHz and 200 kHz to get maximum resolution. If a lower resolution than 10 bits is needed, the input clock frequency to the ADC can be higher than 200 kHz to get a higher sample rate.

The ADC clock is 16 MHz divided by a prescale factor. The prescale is set to 128 (16 MHz / 128 = 125 kHz) in wiring.c. Since a conversion takes 13 ADC clocks, the sample rate is about 125 kHz / 13, or roughly 9600 Hz.

So anyway, setting the prescale to, say, 16 would give a sample rate of about 77 kHz. Not sure what kind of resolution you would get, though!
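To spell out the arithmetic, here's a tiny sketch (16 MHz and the 13-clocks-per-conversion figure are the constants used above; everything else is just for illustration):

// ADC clock = CPU clock / prescale; one conversion takes 13 ADC clocks.
const unsigned long CPU_HZ = 16000000UL;

unsigned long sampleRateHz(unsigned int prescale) {
  return CPU_HZ / prescale / 13;
}

void setup() {
  Serial.begin(9600);
  Serial.println(sampleRateHz(128));   // ~9615 Hz, the Arduino default
  Serial.println(sampleRateHz(16));    // ~76923 Hz
}

void loop() {
}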

EDIT:

Found this reference: http://www.atmel.com/dyn/resources/prod_documents/DOC2559.PDF

Here's a relevant quote:

The ADC accuracy also depends on the ADC clock. The recommended maximum ADC clock frequency is limited by the internal DAC in the conversion circuitry. For optimum performance, the ADC clock should not exceed 200 kHz. However, frequencies up to 1 MHz do not reduce the ADC resolution significantly.

Operating the ADC with frequencies greater than 1 MHz is not characterized.

So it looks like using a prescale of 16 as above would give an ADC clock of 1 MHz and a sample rate of ~77 kHz without much loss of resolution. BTW, this is the code to set the prescale to 16:

// defines for setting and clearing register bits
#ifndef cbi
#define cbi(sfr, bit) (_SFR_BYTE(sfr) &= ~_BV(bit))
#endif
#ifndef sbi
#define sbi(sfr, bit) (_SFR_BYTE(sfr) |= _BV(bit))
#endif

// set prescale to 16
sbi(ADCSRA,ADPS2) ;
cbi(ADCSRA,ADPS1) ;
cbi(ADCSRA,ADPS0) ;

Wow. Thank you jmknapp and oracle.

So if I just include your defines in the top of my code and the function calls in my setup, I should be good to go? As in, analogReads will just return a lot faster (at a bit lower precision)?

How exactly is your code setting the prescale to 16? I don't see any numbers in there...

Chris

Bear in mind that the Arduino digital and analog read/write abstractions that make this platform so easy to use do add to the execution time over what could be achieved if one used low-level code to access these functions directly. The Arduino functions will be slower than the timings quoted above.

For example, the Arduino digitalRead function first does a lookup to convert the Arduino pin number to an actual port and pin. It then disables any PWM function that could be running on this pin. And finally, it executes another dozen instructions or so to actually read the port. I would think it would take well over twenty times longer for each digitalRead compared to directly accessing a specific low-level port pin.

AnalogRead also does the Arduino pin mapping lookup and it sets the analog reference bits each time analogRead is called, although this probably represents a small fraction of the total ADC conversion time.

If your application does need digital read times under a microsecond, you can read more about direct port manipulation on the port manipulation page of the Arduino reference.
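Just as a rough illustration, something like this reads digital pin 7 directly (pin 7 maps to bit 7 of port D on a standard ATmega168/328 board; check the pin mapping for your own board):

void setup() {
  pinMode(7, INPUT);                // configure the pin as usual
  Serial.begin(9600);
}

void loop() {
  // digitalRead(7) does a pin-number lookup, a PWM check, then the port read.
  int slow = digitalRead(7);

  // Reading the PIND register directly is a single instruction; bit 7 is pin 7.
  int fast = (PIND & (1 << 7)) ? HIGH : LOW;

  Serial.print(slow);
  Serial.print(" ");
  Serial.println(fast);
  delay(500);
}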

Wow. Thank you jmknapp and oracle.
So if I just include your defines in the top of my code and the function calls in my setup, I should be good to go? As in, analogReads will just return a lot faster (at a bit lower precision)?
Chris

Yes, analogRead() will just return faster if you set the prescale; I've tried it. Here's a little test program that shows the effect:

#define FASTADC 1

// defines for setting and clearing register bits
#ifndef cbi
#define cbi(sfr, bit) (_SFR_BYTE(sfr) &= ~_BV(bit))
#endif
#ifndef sbi
#define sbi(sfr, bit) (_SFR_BYTE(sfr) |= _BV(bit))
#endif

void setup() {
  unsigned long start ;
  int i ;
  
#if FASTADC
  // set prescale to 16
  sbi(ADCSRA,ADPS2) ;
  cbi(ADCSRA,ADPS1) ;
  cbi(ADCSRA,ADPS0) ;
#endif

  Serial.begin(9600) ;
  Serial.print("ADCTEST: ") ;
  start = millis() ;
  for (i = 0 ; i < 1000 ; i++)
    analogRead(0) ;
  Serial.print(millis() - start) ;
  Serial.println(" msec (1000 calls)") ;
}

void loop() {
}

As it stands above, with FASTADC defined as 1, the 1000 calls to analogRead() take 16 msec (16 microseconds per call). With FASTADC defined as 0, the default prescale gives 111 microseconds per call.

Personally I've been looking into ways to make ADC processing less expensive in CPU cycles, to avoid calling analogRead(), which burns over 100 microseconds sitting in a loop waiting for the conversion to finish. It turns out there's a mode where the completion of a conversion can generate an interrupt, so that's the route I'm going: the processor can be doing other things during the conversion (as long as the result is not needed right away!).
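A rough sketch of that interrupt-driven idea, assuming an ATmega168/328 at 16 MHz reading channel 0 in free-running mode with a prescale of 16 (register and bit names are straight from the datasheet; this just shows the mechanism, not a drop-in replacement for analogRead()):

#include <avr/interrupt.h>

volatile int adcValue = 0;         // latest conversion result
volatile bool adcReady = false;    // set by the ISR when a new value arrives

ISR(ADC_vect) {
  adcValue = ADC;                  // reading ADC (ADCL then ADCH) grabs the 10-bit result
  adcReady = true;
}

void setup() {
  Serial.begin(9600);

  ADMUX  = _BV(REFS0);             // AVcc reference, input channel 0
  ADCSRB = 0;                      // free-running trigger source
  ADCSRA = _BV(ADEN)               // enable the ADC
         | _BV(ADATE)              // auto-trigger (free-running)
         | _BV(ADIE)               // interrupt on conversion complete
         | _BV(ADPS2);             // prescale 16 -> 1 MHz ADC clock at 16 MHz
  sei();
  ADCSRA |= _BV(ADSC);             // start the first conversion
}

void loop() {
  if (adcReady) {
    noInterrupts();
    int v = adcValue;              // copy the two-byte result atomically
    adcReady = false;
    interrupts();
    Serial.println(v);             // the ADC keeps converting in the background
  }
}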

As for how the sbi/cbi lines set a prescale of 16, that comes from a table in the Atmega data sheet:
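ADPS2  ADPS1  ADPS0  Division factor
  0      0      0         2
  0      0      1         2
  0      1      0         4
  0      1      1         8
  1      0      0        16
  1      0      1        32
  1      1      0        64
  1      1      1       128

So the sbi/cbi calls set ADPS2:0 to 1-0-0, which selects a division factor of 16 (16 MHz / 16 = 1 MHz ADC clock).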

Do you guys have a sense of how much the resolution or accuracy of the inputs varies as the speed increases? I'm happy to decrease the default prescale factor if we still get accurate readings. (I think it's better in most cases to be accurate and a bit slow than the other way around.) Any thoughts on which value provides the best balance of accuracy and speed?


If the tradeoff isn't too bad, you can add a new function, something like analogReadFast, and leave the existing functionality unchanged.
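Something along these lines, for example (analogReadFast here is just a hypothetical name; it temporarily drops the prescale to 16 around an ordinary conversion and then restores the previous setting):

// Hypothetical wrapper: do one conversion with a prescale of 16, then
// restore the previous ADC settings so the stock analogRead() is unchanged.
int analogReadFast(uint8_t pin) {
  uint8_t oldADCSRA = ADCSRA;               // remember the current prescale bits
  ADCSRA = (ADCSRA & ~0x07) | _BV(ADPS2);   // ADPS2:0 = 100 -> divide by 16
  int value = analogRead(pin);              // normal conversion, faster ADC clock
  ADCSRA = oldADCSRA;                       // put things back the way they were
  return value;
}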

Do you guys have a sense of how much the resolution or accuracy of the inputs varies as the speed increases? I'm happy to decrease the default prescale factor if we still get accurate readings. (I think it's better in most cases to be accurate and a bit slow than the other way around.) Any thoughts on which value provides the best balance of accuracy and speed?

I don't know how much the accuracy drops if you exceed the recommended rates in the datasheet, but I would think it would not be good practice to add functionality to the standard Arduino platform that clocked the ADC faster than the manufacturer's recommendation. And if my math is correct, at 16 MHz a prescale of 64 or less will exceed the 200 kHz max ADC clock rate stated in the datasheet (16 MHz / 64 = 250 kHz, while the default of 128 gives 125 kHz).

Perhaps someone who has the time to explore and document the effect on accuracy could post some information and code in the playground.

Oracle, did you have a chance to do any tests on accuracy when you ran the speed tests?


When I was doing the speed tests, I didn't know there was a prescaler for the ADC speed; I thought it was fixed for the chip, so running speed or accuracy tests on that part never occurred to me.

It probably wouldn't have mattered to me since I knew my bottleneck was the LCD. And my project is working perfectly now with the timer interrupt library from playground, so I didn't actually need to speed up my loop.

I don't know how much the accuracy drops if you exceed the recommended rates in the datasheet, but I would think it would not be good practice to add functionality to the standard Arduino platform that clocked the ADC faster than the manufacturer's recommendation. And if my math is correct, at 16 MHz a prescale of 64 or less will exceed the 200 kHz max ADC clock rate stated in the datasheet.

Still, the Atmel document referred to above (http://www.atmel.com/dyn/resources/prod_documents/DOC2559.PDF) says:

The ADC accuracy also depends on the ADC clock. The recommended maximum ADC clock frequency is limited by the internal DAC in the conversion circuitry. For optimum performance, the ADC clock should not exceed 200 kHz. However, frequencies up to 1 MHz do not reduce the ADC resolution significantly. Operating the ADC with frequencies greater than 1 MHz is not characterized.

When using single-ended mode, the ADC bandwidth is limited by the ADC clock speed. Since one conversion takes 13 ADC clock cycles, a maximum ADC clock of 1 MHz means approximately 77k samples per second. This limits the bandwidth in single-ended mode to 38.5 kHz, according to the Nyquist sampling theorem.

To me, that's as good as a green light, direct from the manufacturer, to go to 1 MHz and a 77k sample rate--with no significant effect on resolution.

I wonder if the use of the term resolution rather than accuracy is significant. That note discusses calibrating an ADC; it is not clear what errors an uncalibrated ADC will have if used outside the recommended clock speeds. It could well be no problem, but I would not take that app note as a green light to exceed the specs in the datasheet without some real-world testing.

I wonder if the use of the term resolution rather than accuracy is significant. That note discusses calibrating an ADC; it is not clear what errors an uncalibrated ADC will have if used outside the recommended clock speeds.

But the note says that calibration applies only to differential ADC mode:

For most applications, the ADC needs no calibration when using single ended conversion. The typical accuracy is 1-2 LSB, and it is often neither necessary nor practical to calibrate for better accuracies.

However, when using differential conversion the situation changes, especially with high gain settings.

I believe the note uses accuracy/resolution interchangeably:

The AVR uses the test fixture's high accuracy DAC (e.g. 16-bit resolution) to generate input voltages to the calibration algorithm.

There are standalone ADC chips which are a lot faster, but I have no experience with them.

Does anyone have experience with ADC chips?

Also, would using a multiplexer speed up analog reads?

I have used the MUX shield to read 16 analog pots, one every 16th cycle of the main loop (which also steps through the MUX channels). Previously I had used separate analog inputs, each also read on a main-loop-based counter.

Only taking one reading per main loop cycle made a huge difference, and this ought to help it even more.
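For illustration, the round-robin pattern looks roughly like this (the select pins and analog input below are placeholders for a 74HC4067-style mux, not the actual MUX shield wiring):

// Read 16 pots through an analog mux, one channel per pass through loop().
const byte selectPin[4] = {2, 3, 4, 5};   // S0..S3 of the mux (placeholder pins)
const byte muxAnalogIn = 0;               // mux common output wired to analog 0
int potValue[16];
byte channel = 0;

void setup() {
  for (byte b = 0; b < 4; b++)
    pinMode(selectPin[b], OUTPUT);
}

void loop() {
  // address the next mux channel
  for (byte b = 0; b < 4; b++)
    digitalWrite(selectPin[b], bitRead(channel, b));
  potValue[channel] = analogRead(muxAnalogIn);   // one conversion per loop cycle
  channel = (channel + 1) & 0x0F;                // move on to the next pot
  // ... the rest of the main loop runs here every cycle ...
}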

edit:
OK, I got home and tried it. My ISR which generates sound uses about 40% of the processing time, I have a lot of stuff going on in my main loop, and there is one analogRead per loop (different pots through the MUX).

The test bit was cycling at 460 Hz. Changing 3 analogWrites to direct OCRxx writes and some digitalWrites to direct bit writes to the ports improved that to 480 Hz. Then I did this prescale tweak and the speed improved to 530 Hz. So even with all the other things happening in my loop (quite a lot, including a lot of "map" and other multiplication and division), that's quite a nice improvement!

edit:
Oh yes, I do not notice any difference in the accuracy of the potentiometers; as far as I can tell I can't hear any wavering of the controls when setting the filter frequency or whatever.

I know this is an old thread, but I want to thank anyone that is still around for contributing. I ran the Duemilanove using a prescale of 16 (thanks jmknapp!) and performed analogReads on a 1 kHz signal. Here is the result (100 samples of a 1000 Hz sine wave):

That gives a 56k sample rate! Next up: a prescale of 16 plus a 20 MHz crystal, and I2C to move the data. We're very close to a very usable $30 DAQ! Thanks guys.

Just for comparison, here are the same 100 samples of a 1000 Hz sine wave using the normal prescale of 128 (it automatically resets itself back to 128 when the Arduino reboots/resets).

Here's the code ...

/*
  Analog Input with prescale change
  Reading a 1 kHz sine wave, 0 to 5 volts
  Using analog 0
  Results stored in memory for highest speed
  using code from:
  http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1208715493/11
  with special thanks to jmknapp
 */

#define FASTADC 1
// defines for setting and clearing register bits
#ifndef cbi
#define cbi(sfr, bit) (_SFR_BYTE(sfr) &= ~_BV(bit))
#endif
#ifndef sbi
#define sbi(sfr, bit) (_SFR_BYTE(sfr) |= _BV(bit))
#endif

int value[100];   // array to store the values coming from the sensor
int i=0;

void setup() 
{
  Serial.begin(9600) ;

#if FASTADC
  // set prescale to 16
  sbi(ADCSRA,ADPS2) ;
  cbi(ADCSRA,ADPS1) ;
  cbi(ADCSRA,ADPS0) ;
#endif
}

void loop() 
{ 
  for (i = 0; i < 100; i++)
  {
    value[i] = analogRead(0);
  } 
  for (i = 0; i < 100; i++)
  {
    Serial.println(value[i]);
  } 
  Serial.println();
  Serial.println();
  Serial.println();
  delay(5000);
}

You'll (hopefully) find continuing progress at:

Thanks to everyone on this thread. I'm getting my first Arduino in a couple of days, and I need to read 32 analog inputs several hundred times quickly. The prescale factor of 16 will make this possible. I will try to post how this turns out when/if I get it working.

I know this is an old thread, but I'm also looking at speeding up the sampling time for analogReads, and I have a couple of questions. The datasheet talks about increasing the ADC clock to 1 MHz, but says it may have an impact on resolution. The datasheet also alludes to the fact that the reasoning behind the recommended 200 kHz clock is that the ADC is optimized around the connected voltage source having an output impedance of 10k or less. Is it reasonable to assume that you might be able to maintain the same resolution at higher speeds if the output impedance of the voltage source is significantly lower? Is the resolution loss a function of capacitance charge time in the sample-and-hold circuitry? I'm going to be connecting a MAX4372 to the Arduino, and it has an output impedance of 1.5 ohms, so I'm thinking I should be OK with increasing the frequency to a higher speed. Anyone have any thoughts on this matter? Thanks.

Hi!
I'm very interested in fast analog ports on the Arduino too.
So I tried some speed tests on this instruction: v = analogRead(pin); (v and pin are ints).

Prescaler   Maximum sampling frequency
   16       62.5 kHz
   32       33.2 kHz
   64       17.8 kHz
  128       8.9 kHz

To get the maximum precision, I would take just the sampling speed I need, not more. I didn't test prescalers lower than 16 because of the datasheet's 1 MHz limit on the ADC clock.
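If it helps, here is a small helper along those lines (just a sketch with a made-up name, assuming 16 MHz and 13 ADC clocks per conversion); it returns the ADPS2:0 bits for the largest prescale, i.e. the slowest ADC clock, that still meets a target sample rate:

// Pick the largest prescale (slowest ADC clock, best accuracy) that still
// reaches the requested sample rate.  Hypothetical helper, 16 MHz assumed.
byte prescaleBitsFor(unsigned long targetHz) {
  const byte adpsBits[]        = {7, 6, 5, 4, 3, 2};      // ADPS2:0 for /128 ... /4
  const unsigned int divisor[] = {128, 64, 32, 16, 8, 4};
  for (byte k = 0; k < 6; k++)
    if (16000000UL / divisor[k] / 13 >= targetHz)
      return adpsBits[k];                                 // slowest clock that is still fast enough
  return 2;                                               // /4 is as fast as it gets
}

// usage, e.g. for a 20 kHz sample rate:
//   ADCSRA = (ADCSRA & ~0x07) | prescaleBitsFor(20000);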

Is it possible to get an analog result faster than 13 ADC clocks, say if less resolution is required? Like, maybe 10 or 11 clocks for 8 bit accuracy?

Also, it appears the 1MHz ADC clock rate "limit" is a soft limit. The spec sheet says that faster ADC clock rates haven't been characterized, but it doesn't say it won't work at all.

I want to push this to the limit because I intend to try to get at least 8 analog signals (16 if I go for the MEGA) streaming into my computer as fast as possible... hopefully 10 kHz each. It may take some low-level poking around (especially to stream it reliably over USB... that will be a challenge!), but this is my goal.

Does Arduino have separate sample-and-hold circuitry for each analog input pin, allowing simultaneous sampling of all the analog input pins? This would be really nice (though not absolutely required).