I didn’t want to resurrect an old thread, so I made this new one.
I’m looking at this example:

``````
const int NINPUTS = 4;
const byte inputPin[NINPUTS] = { A0, A4, A3, A5 };
float smoothedReading[NINPUTS];

for (int i = 0; i < NINPUTS; i++) {
  smoothedReading[i] = smoothedReading[i] * 0.75 + analogRead(inputPin[i]) * 0.25;
}
``````

Can someone clarify what they meant by this:

aarg said:

It’s math. You’re taking 25% of the current reading, and adding 75% of the previous reading. If it were 50/50 it would be a standard average. Get it? Then rinse and repeat. The result becomes the previous reading for the next round.

``````
const int NINPUTS = 2;
const byte inputPin[NINPUTS] = { A0, A4 };
float smoothedReading[NINPUTS];

for (int i = 0; i < NINPUTS; i++) {
  smoothedReading[i] = smoothedReading[i] * 0.75 + analogRead(inputPin[i]) * 0.25;
}
``````

Would the numbers in bold (the 0.75 and 0.25) be any different depending on the number of analog inputs chosen?
I’m not getting the reasoning behind why they chose 0.75 and 0.25.

The code doesn’t have an option to change the number of readings.
How would I embed this into the smoothing code example on the Arduino site?

Would I have to make another array for it?:

``````
  // calculate the average:
``````

The number of inputs is defined by NINPUTS. The weighting of the reading is determined by the 0.75/0.25 coefficients and is independent of the number of inputs.
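In code, one round of that weighted update is a single line. Here is a minimal plain-C++ sketch of the step (using an ordinary double argument in place of Arduino's analogRead, and a function name of my own):

```cpp
// One round of the filter: 75% of the previous result plus 25% of the new
// reading. The return value becomes "previous" on the next round.
double smoothStep(double previous, double reading) {
    return previous * 0.75 + reading * 0.25;
}
```

Starting from 0 and feeding in a constant reading of 100 gives 25, then 43.75, then 57.8125, and so on, creeping toward 100 — and that happens per pin, regardless of how many inputs are being filtered.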

But why did they choose 0.75/0.25 versus

``````
average = total / numReadings;
``````

?

It's a false moving average. It is an approximation to a true moving average, but without needing a larger memory buffer.
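To see what "without needing a larger memory buffer" means in practice, compare the two side by side — a true moving average has to keep the last N readings, while the filter keeps a single variable. A plain-C++ sketch (function names are mine):

```cpp
// True moving average over the last `window` samples: needs every one of
// those samples available in memory.
double trueMovingAverage(const double *readings, int count, int window) {
    int start = (count > window) ? count - window : 0;
    double total = 0.0;
    for (int i = start; i < count; i++) total += readings[i];
    return total / (count - start);
}

// "False" moving average: one state variable, no buffer, no matter how
// long the history of readings is.
double falseMovingAverage(const double *readings, int count, double coeff) {
    double smoothed = 0.0;
    for (int i = 0; i < count; i++)
        smoothed = smoothed * (1.0 - coeff) + readings[i] * coeff;
    return smoothed;
}
```

For a steady input the two converge to the same value; the exponential version just gets there asymptotically instead of after exactly N samples.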

It is doing digital filtering, NOT averaging. Though a more conventional way to do it would be:

``````
smoothedReading[i] += (reading[i] - smoothedReading[i]) * 0.25;
``````

The idea is to prevent noise spikes in the input signal from reaching the output signal without being attenuated. With a coefficient of 0.25, though, not a lot of filtering will take place. A more typical coefficient would be on the order of 0.1, if not much less. The smaller the coefficient, the more any noise in the reading will be rejected, but the longer it will take for a truly correct, stable reading to be achieved, since the final value is approached asymptotically.

Regards,
Ray L.
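Ray's trade-off can be made concrete by counting how many updates the filter needs to climb to 90% of a step input for different coefficients. A plain-C++ sketch (my function name):

```cpp
// Number of filter updates for the output to climb from 0 to at least 90%
// of a constant input of 100, for a given coefficient.
// Smaller coefficient = more noise rejection, but a slower climb.
int settleSteps(double coeff) {
    double smoothed = 0.0;
    int n = 0;
    while (smoothed < 90.0) {
        smoothed += (100.0 - smoothed) * coeff;  // same filter as above
        n++;
    }
    return n;
}
```

A coefficient of 0.25 reaches 90% in 9 updates; 0.1 takes 22 — smoother output, but a slower response, and the final value is only ever approached asymptotically.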