Unsteady voltage reading

Hi,
On A0 of my Nano I'm feeding a steady 3.1V signal (no ripple); however, what I'm reading through the following code fluctuates between 2.73 and 2.76V.
How can I get an accurate reading?
TIA

const int pin = A0;  //=== steady 3.1V

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = 0;
  float voltage = 0.00;
  int x;

  for (x = 0; x <= 100; x++) {
    voltage += analogRead(pin);
  }

  voltage /= x; //=== take an avg of 100 readings

  voltage *= (5.0 / 1023.0); //2.73..2.76V

  Serial.println(voltage);  //

  delay(1000);
}

You sum 101 readings: the loop runs for x = 0 through 100. After the loop exits, x is 101, so you divide by 101, not 100.

Print x to the serial port just before the division to see it for yourself.
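A fixed version of that averaging loop can be sketched off-target like this (the stub `analogRead` and its return value are stand-ins for illustration, not from the thread):

```cpp
#include <cassert>
#include <cmath>

// Stand-in for the Arduino analogRead(); a fixed raw count is returned here
// purely so the averaging arithmetic can be checked off the board.
int analogRead(int /*pin*/) { return 634; }  // roughly 3.1 V at a 5 V reference

// Average exactly n readings: the loop body runs n times (i = 0 .. n-1),
// and we divide by that same n, avoiding the off-by-one of dividing by
// the post-loop value of the counter.
float averageVolts(int pin, int n) {
  long sum = 0;                   // raw counts; long leaves plenty of headroom
  for (int i = 0; i < n; i++) {
    sum += analogRead(pin);
  }
  float avg = (float)sum / n;     // divide by the n we actually summed over
  return avg * (5.0f / 1023.0f);  // scale raw count to volts
}
```

On the board you would keep the same structure, just with the real `analogRead()`.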

Ok, I took x out of the equation, but it still didn't make a difference.

Even using actual Vcc as a reference, the readings don’t change much…

const int pin = A0;  //=== steady 3.1V
unsigned int ADCValue;
double V_In, voltage;
double Vcc;

void setup() {
  Serial.begin(9600);
}

void loop() {

  Vcc = readVcc() / 1000.0;
  
  V_In = analogRead(pin);

  voltage = (V_In / 1024.0) * Vcc; //2.74..2.76V

  Serial.print("Vcc = "); Serial.print(Vcc);
  Serial.print(" Vin = "); Serial.println(voltage);  //

  delay(1000);
}

long readVcc() {
  long result;
  // Read 1.1V reference against AVcc
  ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);
  delay(2); // Wait for Vref to settle
  ADCSRA |= _BV(ADSC); // Convert
  while (bit_is_set(ADCSRA, ADSC));
  result = ADCL;
  result |= ADCH << 8;
  result = 1125300L / result; // Back-calculate AVcc in mV
  return result;
}
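For reference, the 1125300L constant is just 1.1 V × 1023 × 1000, so the back-calculation can be checked on paper (the raw count below is a hypothetical example, not a value from the thread):

```cpp
#include <cassert>

// The ADC measures the 1.1 V bandgap against AVcc:
//   ADC = 1023 * 1.1 / Vcc,  so  Vcc(mV) = 1.1 * 1023 * 1000 / ADC = 1125300 / ADC
long backCalcVccMillivolts(long raw) {
  return 1125300L / raw;  // integer millivolts, same as readVcc() above
}
```

For instance, a raw count of 225 back-calculates to about 5.0 V.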

Ok, I used a second voltmeter to confirm the actual Vin, which is produced by two resistors wired as a voltage divider. The meter now reads 2.65V, but the voltage reported over serial is 2.28V, a 0.37V difference, which is a problem when reading a temperature sensor like a LM35/36.

Carefully measure the voltage on the AREF pin and enter it into the aRef variable. Then carefully measure the voltage on pin A0 while the program is running; if that differs from the serial monitor output by more than 5%, you have either bad wiring, a bad meter, or a bad Arduino.
Here’s a test program:

// measure Vin with accurate meter.

const float aRef = 4.99; // measured with accurate meter
float volts;
int total;
void setup() {
  Serial.begin(9600);
}

void loop() {
  total = 0;
  for (int i = 0; i < 16; i++) {
    total += analogRead(A0);
  }
  total /= 16;
  volts = total * aRef / 1024;
  Serial.println(volts);
  delay(1000);
}

How do you know you're getting a steady voltage? What have you used to measure the ripple?

What circuit is hooked to the input? Where are you getting this voltage from?

Welcome to the world of Random Error.

You should understand that all equipment produces a natural distribution of readings around a mean.

You can increase the precision by taking more readings and an average. This reduces the Standard Deviation (the spread) but will not help Accuracy (closeness to the true value).

A calibrated reference source would be the way to correct for any accuracy offset.

Taking more samples will help reduce the margin of error.

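What shrinks with sample count is the spread of the average, not the spread of individual readings; a minimal sketch of that relationship:

```cpp
#include <cassert>
#include <cmath>

// Standard deviation of the mean of n independent readings: the per-reading
// sigma is unchanged, but the spread of the average shrinks as 1/sqrt(n).
double sigmaOfMean(double sigmaSingle, int n) {
  return sigmaSingle / std::sqrt((double)n);
}
```

So averaging 100 readings cuts the spread of the reported value by a factor of 10.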

I can only say this again... REMEMBER: accuracy cannot be fixed by taking more samples. Your device could always return 4.1V over 1000 readings, but unless you calibrate, the actual value could be 4.3V.

A known-calibrated voltage source would be required for this (and a calibration curve/LUT)

Thank you everyone.

Actually the readVcc() function was causing the problem.

Board: Nano
Meter used: Fluke 87
Vcc read on 5V pin: 4.78V
Vcc read by readVcc() sub: 4.59V
Volts read on Vref pin: 4.78V
Volts read on pin A0 by code: 2.38V
Volts read on A0 pin: 2.38V
Voltage injected on pin A0 through 2 identical resistors @ 1% tolerance used as a voltage divider.
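Those numbers are self-consistent: two identical resistors halve the rail, and 4.78 V / 2 = 2.39 V, right next to the 2.38 V measured. A sketch of the divider arithmetic (the resistor values are hypothetical; only the ratio matters):

```cpp
#include <cassert>
#include <cmath>

// Resistive divider: Vout = Vin * R2 / (R1 + R2); equal resistors give Vin / 2.
double dividerOut(double vin, double r1, double r2) {
  return vin * r2 / (r1 + r2);
}
```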

Johnny010:

Taking more measurements does not reduce the standard deviation. The standard deviation is what it is. Taking a lot of measurements can reduce the uncertainty, so that your measured mean is more tightly bounded to the actual mean. One of my favorite formulas is:

Ybar +/- t*sp/sqrt(r)

The 95% confidence limits around the mean (Ybar) are +/- that quantity, where:
t: a factor that depends on the degrees of freedom
sp: the standard deviation of the sample
r: the number of values used.

t varies from 12.7 for 1 degree of freedom down to 2.00 at df = 60 (approaching 1.96 in the limit); df = 30 gets you a t of 2.04.
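That half-width can be computed directly; the numbers below are made-up illustrations, not measurements from this thread:

```cpp
#include <cassert>
#include <cmath>

// 95% confidence half-width around the sample mean: t * sp / sqrt(r).
double ciHalfWidth(double t, double sp, int r) {
  return t * sp / std::sqrt((double)r);
}
```

With t = 2.0, sp = 4.0 and r = 64 readings, the mean is pinned down to within ±1.0.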

KeithRB: Johnny010:

Taking more measurements does not reduce the standard deviation. The standard deviation is what it is.

So, let's say I roll a six-sided die from now until doomsday. I get an unending series of values that has a standard deviation that "is what it is".

If I roll dice 100 times and average, and do that from now until doomsday, I get an unending series of values that each will be very close to 3.5. If that isn't a change in the standard deviation, what is it?

Rolling one die does have a standard deviation: a fair six-sided die is a uniform distribution with mean 3.5 and standard deviation sqrt(35/12), about 1.71.

It really depends on what you want to know. Averaging to reduce the error bounds of a measurement does not reduce the standard deviation of the population. While you have a lot of mean measurements with low error bounds, the "measurement" is still a uniform distribution with a mean of 3.5 and a standard deviation of about 1.71.

KeithRB: Johnny010:

Taking more measurements does not reduce the standard deviation. The standard deviation is what it is. Taking a lot of measurements can reduce the uncertainty, so that your measured mean is more tightly bounded to the actual mean. One of my favorite formulas is:

Sorry my bad!

But yeah, your confidence interval will become narrower as more samples are taken.

On that thought, is there a delay needed between samples on an ADC to resolve any particular SD/random-error issues?

PaulMurrayCbr: If I roll dice 100 times and average, and do that from now until doomsday, I get an unending series of values that each will be very close to 3.5. If that isn't a change in the standard deviation, what is it?

That's just tightening up your confidence interval for the mean value (I am 70% certain that the mean is within 10% of X value).

Totally different thing than standard deviation, it's like comparing an apple to a hammer.
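The two quantities being argued about here can both be computed for the die example; a small sketch from first principles:

```cpp
#include <cassert>
#include <cmath>

// Fair six-sided die: mean and population standard deviation from the
// definition, plus the standard deviation of the average of n rolls.
double dieMean() {
  double s = 0;
  for (int f = 1; f <= 6; f++) s += f;
  return s / 6.0;  // 3.5
}

double dieSigma() {
  double mu = dieMean(), ss = 0;
  for (int f = 1; f <= 6; f++) ss += (f - mu) * (f - mu);
  return std::sqrt(ss / 6.0);  // sqrt(35/12), about 1.708 -- fixed for the die
}

double sigmaOfAverage(int n) {
  return dieSigma() / std::sqrt((double)n);  // this is what averaging shrinks
}
```

The per-roll sigma stays about 1.71 no matter how long you roll; only the spread of a 100-roll average drops, to about 0.17.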