Analog input clock

After acquainting myself with Arduino, I decided to study AVR in more depth and switched to AVR C. I came across the clock prescaler for the ADC. I understand that it determines the frequency of the clock fed to the ADC, but what is the significance of the clock in an ADC, and why do we need to use prescalers? Also, at what prescaler does analogRead() run?

sreedevk:
After acquainting myself with Arduino, I decided to study AVR in more depth and switched to AVR C. I came across the clock prescaler for the ADC. I understand that it determines the frequency of the clock fed to the ADC, but what is the significance of the clock in an ADC, and why do we need to use prescalers?

In theory, slower conversions are more immune to noise. In practice I’m not sure it makes much difference.

sreedevk:
Also, at what prescaler does analogRead() run?

You can check the register yourself:

Serial.println(ADCSRA, HEX);

The Arduino core sets it up in init() in wiring.c:

#if defined(ADCSRA)
	// set a2d prescale factor to 128
	// 16 MHz / 128 = 125 KHz, inside the desired 50-200 KHz range.
	// XXX: this will not work properly for other clock speeds, and
	// this code should use F_CPU to determine the prescale factor.
	sbi(ADCSRA, ADPS2);
	sbi(ADCSRA, ADPS1);
	sbi(ADCSRA, ADPS0);

	// enable a2d conversions
	sbi(ADCSRA, ADEN);
#endif

Slower conversions give more bits of precision. Datasheet, page 253:

By default, the successive approximation circuitry requires an input clock frequency between 50 kHz and 200 kHz to get maximum resolution. If a lower resolution than 10 bits is needed, the input clock frequency to the ADC can be higher than 200 kHz to get a higher sample rate.

The ADC module contains a prescaler, which generates an acceptable ADC clock frequency from any CPU frequency above 100 kHz. The prescaling is set by the ADPS bits in ADCSRA. The prescaler starts counting from the moment the ADC is switched on by setting the ADEN bit in ADCSRA. The prescaler keeps running for as long as the ADEN bit is set, and is continuously reset when ADEN is low.

If you are in a hurry, use a lower prescaler (higher ADC clock frequency) as a trade-off for lower resolution.

The default prescaler takes 104 µs to do a conversion:

 62.5 ns * 128 * 13 = 104000 ns = 104 µs

What does the function sbi() do?

Sets a bit. So in the example above the bits ADPS0, ADPS1 and ADPS2 will all be set (i.e. the low-order bits of ADCSRA will be 0x07).

thanks for the info.

fungus:
In theory, slower conversions are more immune to noise. In practice I'm not sure it makes much difference.

In my experience it makes a HUGE difference. Using the lowest divider compared to the highest can result in 6-7% error, and even bumping up from the lowest divider to the next one gives a big improvement. And by the way, it's not so much noise that causes trouble as the settling time of the sample-and-hold circuit.

Oversampling is another way to get better readings. For most of my projects, sampling 64x has big benefits: the 16-bit result is close to what one might get with much more expensive add-on converter chips, and a side benefit is that the upper byte is rock solid compared to a single reading. There is almost no penalty in code size (no multiply or other fancy math) and the speed is acceptable for most applications.