# 'Smoothing' example yields negative numbers with greater numReadings

Hey All,

I have some issues with the smoothing algorithm used in the examples page. Once the size of the array gets bigger than around 35, the average becomes a (seemingly random) negative number. I haven’t seen a number lower than -850, so I assume it may be constrained to a range of -1023 to 0.

I’m simply reading resistance off a probe hanging off A0. With smaller arrays I get what you would expect: a range of values from 0 to 1023.

I increased numReadings to 100 and something interesting happened. When I separate the probes, which should give a high-resistance reading of +1023, I instead got -287. This is what happens in the monitor, in slow motion, during the transition:

• The averages increased from +0 to +327, where they suddenly jumped to -317 and counted up through zero, back into positive territory.
• It kept counting upwards in positive numbers until it hit +326.
• +326 was repeated, then the data stream flipped over into negative numbers again, going straight to -318 (repeated 5 times), then to -308 (repeated twice), before settling on -287 for the long haul.

It makes me worry that there is an issue with the algorithm itself, since based on the code it should be totally scalable.

Also, if I wanted smoothing over a larger number of input reads (say 100), is there any reason why I couldn’t use the default 10 reads, average them, then put those averages into another array and average 10 of the averages? Similar result, but using only 20% of the memory that a 100-read average would.

I’m using a Uno, BTW.

```cpp
/*

Smoothing

Reads repeatedly from an analog input, calculating a running average
and printing it to the computer.  Keeps ten readings in an array and
continually averages them.

The circuit:
* Analog sensor (potentiometer will do) attached to analog input 0

Created 22 April 2007
By David A. Mellis  <dam@mellis.org>
modified 9 Apr 2012
by Tom Igoe
http://www.arduino.cc/en/Tutorial/Smoothing

This example code is in the public domain.

*/

// Define the number of samples to keep track of.  The higher the number,
// the more the readings will be smoothed, but the slower the output will
// respond to the input.  Using a constant rather than a normal variable lets
// us use this value to determine the size of the readings array.
const int numReadings = 100;

int readings[numReadings];      // the readings from the analog input
int index = 0;                  // the index of the current reading
int total = 0;                  // the running total
int average = 0;                // the average

int inputPin = A0;

void setup() {
  // initialize serial communication with computer:
  Serial.begin(9600);
  // initialize all the readings to 0:
  for (int thisReading = 0; thisReading < numReadings; thisReading++)
    readings[thisReading] = 0;
}

void loop() {
  // subtract the last reading:
  total = total - readings[index];
  // read from the sensor:
  readings[index] = analogRead(inputPin);
  // add the reading to the total:
  total = total + readings[index];
  // advance to the next position in the array:
  index = index + 1;

  // if we're at the end of the array...
  if (index >= numReadings)
    // ...wrap around to the beginning:
    index = 0;

  // calculate the average:
  average = total / numReadings;
  // send it to the computer as ASCII digits
  Serial.println(average);
  delay(1);        // delay in between reads for stability
}
```

Thanks for any help!
Markus

If you sum a lot of values into a sixteen-bit “int”, this is what happens.
Use “long”.

Ah, I see, newbie error. It's not the output value, it's the summing in between. So if the values in the array add up to more than 32,767, the Arduino will wrap the sum.

So theoretically, given that the maximum value stored in any one integer in the array is 1023, the maximum number of readings that can be stored in an array without running into this averaging problem is 32 (32,767 / 1023). Right?

Cheers and thanks
Markus

Your last bit makes no sense. The first bit is right.

I don't think it's quite right to say that the Arduino will "wrap the summing". What really happens is that when you exceed the value 32767 the most significant bit in the integer is set. If it is defined as an "int" the code will treat the MSB as an indication that this is a negative number. However if it is defined as an "unsigned int" it will use all 16 bits as a positive number.

If your calculations exceed 65535 there should be a 17th bit which an "int" doesn't have so that piece of data is just lost. Nothing gets wrapped. If you need values in excess of what an int can hold use a "long" or "unsigned long".

...R

Just saying since the array is storing reads from analog pin 0, any one read can only be a max of 1023 in value. The problem experienced above occurs when all those readings added together add up to more than an integer can hold (around 32,000).

Therefore if you’re averaging an array of integers read from the analog pins, your array can only be a maximum of 32 values in size. Any more than that and you run the risk that when you add them all together the total (33 × 1023 = 33,759) is more than the averaging integer can contain.

An adaptive averaging algorithm can look ahead for overflow. (OK, one can discuss the usefulness.)

With very low values it can sum hundreds of samples, giving more significant digits when averaging; with high values you already have more significant digits.
Note: 4× more samples gives 1 extra bit:

4 samples → 11 bits
16 samples → 12 bits
64 samples → 13 bits
256 samples → 14 bits

So 100 or more samples give one extra decimal digit; e.g. if the samples are all in the range 0…9, one can easily sum 100+ of them and calculate a significant extra digit.

The code below is a proof of concept, it does not calculate the extra bits (yet)

```cpp
//
//    FILE: adaptiveAverage.ino
//  AUTHOR: Rob Tillaart
// VERSION: 0.1.00
// PURPOSE: demo
//    DATE: 2013-12-18
//     URL:
//
// Released to the public domain
//

void setup()
{
  Serial.begin(115200);
  Serial.println("Start ");
}

void loop()
{
  unsigned int sum = 0;
  unsigned int samples = 0;
  while ((sum < (65535 - 1023)) && (samples < 1000))  // before overflow of sum, and before max nr of samples
  {
    sum += analogRead(A0);
    samples++;
  }
  Serial.println(sum);
  Serial.println(samples);
  Serial.println(1.0 * sum / samples);
  Serial.println();
  delay(1000);
}
```

Output from sampling a floating A0 line:

```
Start
64701
169
382.85

64717
188
344.24

64616
216
299.15

64718
224
288.92

64520
229
281.75

64579
245
263.59

64713
238
271.90

64650
251
257.57

64538
243
265.59

64672
252
256.63

64584
250
258.34

64689
249
259.80

64587
256
252.29

64749
244
265.36

64705
260
248.87

64718
242
267.43

64681
263                <<<<<<<<<< high nr samples
245.94           <<<<<<<<<< lower average

64661
252
256.59

//////// here I touch the arduino board with my hand.

64527
64                <<<<<<< lower nr of samples
1008.23       <<<<<<<<<<< higher average

64854
80
810.67

64876
84
772.33

///////// and let it go again.
64539
285
226.45

64575
107
603.50

64842
122
531.49

64921
143
453.99

64744
165
392.39

64775
188
344.55

64546
206
313.33

64744
218
296.99
```

Updated: prints an extra digit if enough samples are taken…

```cpp
//
//  AUTHOR: Rob Tillaart
// VERSION: 0.1.01
// PURPOSE: demo
//    DATE: 2013-12-18
//     URL:
//
// Released to the public domain

void setup()
{
  Serial.begin(115200);
  Serial.println("Start ");
}

void loop()
{
  unsigned int sum = 0;
  unsigned int samples = 0;
  while ((sum < (65535 - 1023)) && (samples < 1000))
  {
    sum += analogRead(A0);
    samples++;
  }
  Serial.println(sum);
  Serial.println(samples);
  if (samples < 100) Serial.println(1.0 * sum / samples, 0);  // no extra digits
  else               Serial.println(1.0 * sum / samples, 1);  // one extra digit
  Serial.println();
  delay(1000);
}
```

Works well, but given that one sample takes about 125 µs, 1000 samples is quite some time (1000 × 125 µs = 125 ms).

There is usually no benefit to be gained by "smoothing" more than a few (10 or fewer) readings.

Agree, smoothing with 16 values is the most I use in practice: it makes a nice shift by 4, i.e. 2 extra bits.
For signal processing one often wants every individual sample, to be able to extract "high" frequencies that would get lost due to smoothing.

That's genius, Rob!

```cpp
if (samples < 100) Serial.println(1.0 * sum / samples, 0);  // no extra digits
else               Serial.println(1.0 * sum / samples, 1);  // one extra digit
```

I wonder if this is smaller / faster / better in any way…

```cpp
if (samples < 100) Serial.println((float) sum / samples, 0);  // no extra digits
else               Serial.println((float) sum / samples, 1);  // one extra digit
```

You can also use digital filtering to get a smoothed value without needing the array.

The classic first-order low pass filter is:

```cpp
float smoothed = 0.0;

void handle_sample ()
{
  float val = (float) analogRead (..);
  smoothed += 0.01 * (val - smoothed);  // for a time constant of 100 samples or so
}
```

So although this uses floats it only requires a few operations per sample to
give smoothing over almost any timescale you want (use 0.1 for ten samples or so,
0.001 for a thousand...).

It can be done using fixed-point or integer arithmetic, but then you have to worry about rounding and suchlike.

> ... 0.01 ... for a time constant of 100 samples or so

According to Excel, for an alpha of 0.01 it takes 531 samples for the remaining error to drop below 1%. For an alpha of 0.01 at 100 samples, the smoothed value is still about 37% from the asymptote.

An alpha of 0.05 gets the history very close to 100 samples.

For first-order systems the time constant is the time for the remaining error to drop by a factor of e (the base of natural logarithms).

> Therefore if you’re averaging an array of integers read from the analog pins, your array can only be a maximum of 32 values in size. Any more than that and you run the risk that when you add them all together the total (1023*32) may be more than the averaging integer can contain.

I realize you have been shown better ways, but for future reference, there is a way to do an average of ints that sum to larger than an int can hold.

```cpp
unsigned int values[] = { 32000, 31000, 30500, 29600, 31640 };
unsigned long avg;

void setup() {
  Serial.begin(115200);

  for (int i = 0; i < 5; i++) {
    avg += values[i];    // note that you don't even need a cast to unsigned long
    Serial.println(avg);
  }
  avg /= 5;
  Serial.println(avg);
}

void loop() {
}
```

My apologies. I was seeing the world as a hammer (as in, everything looks like a nail). In my dealings with I/O value filtering, the "time / samples to asymptote" figure is more useful. I saw "time constant" and read something else into it.