smoothing pot input

This works nicely. Thanks.

Seems like it can be used to smooth out minor potentiometer output jitter. If I initialize smooth to the same value as the pot, the calculation starts from there, which is good.

I now see the relationship between the two sides of that formula. One side is fast to react; the other is slow but better at smoothing. Like the petrol gauge needle in your car: very good at smoothing, but too slow to be used for updating a value based on a pot turning.

In case anyone wants to try this easily, here's my test code.
If you change the pot value - which has +5/-5 value jitter added - you'll quickly see the data output of 'smooth' stabilize.

float smooth; 
int randomized_potval;

void setup() 
{
  smooth = analogRead(5);  // grab sensor value to be initial value in calculation.
  Serial.begin(115200);
}


void loop() 
{
  float potval = analogRead(5);   // read pot on pin 5
  randomized_potval = random( potval - 5, potval + 6);  // add +/-5 counts of jitter (random's upper bound is exclusive)

  smooth = (0.99 * smooth) + (0.01 * randomized_potval);  // smooth it out

  Serial.print(randomized_potval);              // show jittered version of analog 5 value
  Serial.print(" ");
  Serial.println(int(smooth));            // OUTPUT - we are looking for this smoothed number to be stable.

  delay (2);
}
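
For a rough sense of why that 0.99 / 0.01 split behaves like the petrol gauge: each new reading only contributes 1% of the difference, so after a step change it takes about 100 passes through loop() to get roughly two thirds of the way there (0.99^100 is about 0.37) and around 460 passes to get within 1%. At a couple of milliseconds per pass, that is very smooth but noticeably slow to follow the pot.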

skyjumper:
Currently I am storing the last 20 samples and averaging them, but I have been looking for a way to achieve this without consuming 80 bytes (20 floats)... I am getting speed readings several times each second from a transducer. Each of these readings tends to vary a bit from the prior one, and I just want to get a stable reading to use to present to the driver.

α = 0.25 gives a window that has almost completely decayed after 20 samples (0.75^20 is about 0.3%) and can be implemented without floating-point. I think this will work...

unsigned long history;
unsigned short value;

void setup( void )
{
  Serial.begin( 250000 );
  history = analogRead( 0 ) * 4;   // seed the dot-2 fixed-point value with the first reading
}

void loop( void )
{
  history = analogRead( 0 ) + (((3 * history) + 2) / 4);   // 4*v1 = reading + 3*v0, with rounding
  value = (history + 2) / 4;                               // back to a whole number, rounded

  Serial.println( value );

  delay( 100 );
}

Edit: added rounding.

Thanks!!! I'll play with that...

I think dot-8 rather than dot-2 fixed-point may give slightly more accurate results. Try this one instead...

unsigned long history;
unsigned short value;

void setup( void )
{
  Serial.begin( 250000 );
  history = analogRead( 0 ) * 256;   // seed the dot-8 fixed-point value with the first reading
}

void loop( void )
{
  history = (64*analogRead(0)) + (((64*3*history)+128) / 256);   // 256*v1 = 64*reading + (3/4)*(256*v0), with rounding
  value = (history + 128) / 256;                                 // back to a whole number, rounded

  Serial.println( value );

  delay( 100 );
}

Edit: added rounding.

This looks very interesting, but I don't seem to follow the formula.

Can you explain what it's doing?

The basic formula is...
v1 = (α * analogRead) + ((1 - α) * v0)

α = 0.25 or 1/4 ...
v1 = ((1/4) * analogRead) + ((1 - (1/4)) * v0)
v1 = ((1/4) * analogRead) + ((3/4) * v0)

To make it fixed-point with eight bits for the fraction, multiply both sides by 256 (2 to the power of 8)...
256*v1 = 256 * { ((1/4) * analogRead) + ((3/4) * v0) }
256*v1 = ((256/4) * analogRead) + ((3*256/4) * v0)
256*v1 = (64 * analogRead) + ((3*64) * v0)

The right side has "v0", not "256*v0", so we have to perform the division when calculating the next value. The multiplication is performed first to preserve the precision...
256*v1 = (64 * analogRead) + ((3*64*(256*v0)) / 256)

Finally, to improve the accuracy we need to include rounding...
256*v1 = (64 * analogRead) + (((3*64*(256*v0)) + (256/2)) / 256)

So, history is the "actual" value multiplied by 256. Another way to look at it: history / 256 is the whole number part and history % 256 is the fractional part.
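
A quick worked example (numbers made up for illustration): suppose history is currently 500 * 256 = 128000 and the next reading is 600. Then

history = (64 * 600) + (((3*64 * 128000) + 128) / 256) = 38400 + 96000 = 134400
value = (134400 + 128) / 256 = 525

which matches the floating-point result exactly: (0.25 * 600) + (0.75 * 500) = 525.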

Nice!

Going back to the original, is the big disadvantage that you have to use a float?

Is that much slower? How much?

Float is a lot slower, but I haven't got figures to say exactly how much. At least four times slower.

db2db:
Going back to the original, is the big disadvantage that you have to use a float?

Is that much slower? How much?

Floats are bigger as well. In my case, an unsigned short int at 2 bytes is more than enough space.

Good point about storage. history in Reply #20 can be an unsigned short (half the size of a float). Assuming the "raw" values are between 0 and 1023, up to six fractional bits are possible with an unsigned short (instead of multiplying both sides by 256, multiply both sides by 64). Which is a very nice compromise: about 1.5 decimals, smoothing, and speed, all from just two bytes! Warning: updating history overflows an unsigned short, so the right side of the equation will have to be cast to an unsigned long before the multiply.
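
Not from the thread, but a minimal sketch of what that two-byte, dot-6 variant might look like, assuming 10-bit analogRead values and alpha fixed at 1/4 (the UL constants force the intermediate math into an unsigned long so nothing overflows 16 bits):

unsigned short history;   // dot-6 fixed-point: the smoothed value times 64
unsigned short value;

void setup( void )
{
  Serial.begin( 250000 );
  history = analogRead( 0 ) * 64UL;   // seed with the first reading (64UL avoids 16-bit overflow)
}

void loop( void )
{
  // 64*v1 = 16*reading + (3/4)*(64*v0), with rounding
  history = (16UL * analogRead( 0 )) + (((3UL * 16UL * history) + 32UL) / 64UL);
  value = (history + 32UL) / 64;      // round back to a whole number (0 to 1023)

  Serial.println( value );

  delay( 100 );
}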

history in Reply #22 has to remain an unsigned long (same size as float).

Note: I updated #20 and #22 to make them a bit more accurate.

One thing I didn't make clear is that I get the numbers directly from the instrument system. I don't need to do the A/D conversion.

Anyhow, I have been modeling this in a spreadsheet. I used this formula:

=($B$3 * D4) + ((1 - $B$3) * D3)

Where $B$3 is Alpha. Column D contains the series of input values, so in this example D4 is the current input and D3 is the prior input. I believe I am modeling this correctly from looking at your sample source code.

I like this quite a bit. I plotted the input data, the output data from this filter, and the output data from the strategy of averaging the past 5, 10 and 20 samples. I put in some data that is typical of what I usually see.

I found that an alpha of .25 very closely corresponded to averaging the last 5 samples. Alpha == .15 is very close to the past 10 samples. Alpha == .08 approximates averaging the last 20 samples, although the match is not as close in that case (but still not at all bad).

Using .25 seems to provide the best results. I get samples at a rate of 2 Hz, so that's 2.5 seconds of data.

Getting rid of the past 10 samples saves 18 bytes (since I now only need to save the most recent result). Of course, if the user sets the filter to be slower (a lower alpha), then even more bytes are saved. Since I need to use this filter for about 10 different inputs, this will save at least 180 bytes, which is pretty huge.

Is there a name for this filter?

Thank you very, very much, this is going to be a huge improvement for my project!

I'll attach the spreadsheet for anyone who wants to play with it. I did it in LibreOffice, but saved it as an XLS file since Office can't read ODS. The numbers in the spreadsheet are floats, but in the code of course I'll multiply them by 100.

filter-model.xls (18 KB)

Is there a name for this filter?

It is a first-order recursive (IIR) low-pass filter, also known as an exponentially weighted moving average (EWMA) or exponential smoothing.

skyjumper:
I believe I am modeling this correctly from looking at your sample source code.

You are.

Thank you very, very much, this is going to be a huge improvement for my project!

You are welcome!

I'll attach the spreadsheet for anyone who wants to play with it.

Thank you. I updated it to also model the code from my previous post...

but in the code of course I'll multiply them by 100.

Which works well if you decide to use the fixed-point version (OUT 5 in the updated workbook).

Is there a name for this filter?

:wink:

filter-model.xls (34.5 KB)

I'll look that over this evening, thanks again...

I have to admit, I don't fully understand what's going on in post #24. The A/D conversion is a 10-bit conversion. Are you just deciding that the fractional part of that should be 8 of those 10 bits?

From looking at your revision to the model spreadsheet, I noticed that making that change degrades how accurately the output of the filter reflects the input (when gain/alpha is set to 1). I assume this reflects the reduced precision?

Whether or not this is acceptable of course depends on the application and the range of data.

skyjumper:
One thing I didn't make clear, is that I get numbers directly from the instrument system. I don't need to do the A/D conversion.

Anyhow, I have been modeling this in a spreadsheet. I used this formula:

=($B$3 * D4) + ((1 - $B$3) * D3)

Where $B$3 is Alpha. Column D contains the series of input values, so in this example D4 is the current input and D3 is the prior input. I believe I am modeling this correctly from looking at your sample source code.

I like this quite a bit. I plotted the input data, the output data from this filter, and the output data from the strategy of averaging the past 5, 10 and 20 samples. I put in some data that is typical of what I usually see.

Actually I think that I did this wrong. The spreadsheet is probably right, but the formula should be:

=($B$3 * D4) + ((1 - $B$3) * E3)

Where $B$3 is Alpha. Column D contains the series of input values and column E has output values, so in this example D4 is the current input and E3 is the prior output.

I noticed some discrepancies as the numbers got bigger and the filter became less effective.
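
In code form (hypothetical function names, just to show the difference between the two spreadsheet formulas):

// First attempt: smooths against the prior *input*, so it only ever looks back one sample
float almostSmoothed( float input, float previousInput, float alpha )
{
  return (alpha * input) + ((1.0 - alpha) * previousInput);
}

// Corrected filter: feeds back the prior *output*, so every earlier sample keeps a fading weight
float smoothed( float input, float previousOutput, float alpha )
{
  return (alpha * input) + ((1.0 - alpha) * previousOutput);
}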

skyjumper:
I have to admit, I don't fully understand what's going on in post #24. The A/D conversion is a 10 bit conversion. Are you just deciding that the fractional part of that should be 8 of those 10 bits?

No. The full 10 bits are used as-is.

Imagine you want to add 1/2 to 1/2. Using integer operations the result is not at all what we want: zero.

Let's take a simple 16 bit integer...

bbbb bbbb bbbb bbbb

...pretend there is a decimal point in the middle...

bbbb bbbb . bbbb bbbb

...also pretend that each bit to the right of the decimal is a fraction of a successive power of two...

bbbb bbbb . 1234 5678

"1" is the 1/2 place
"2" is the 1/4 place
"3" is the 1/8 place
"4" is the 1/16 place
etcetera

...and pretend each bit to the left of the decimal is just a normal integer...

iiii iiii . 1234 5678

The value 9.00 would be stored as...

0000 1001 . 0000 0000

...or 0x0900. The value 0.50 (1/2) would be stored as...

0000 0000 . 1000 0000

...or 0x0080. Adding 1/2 to 1/2 is just like adding normal integers (remember, the decimal isn't really there; we're just pretending it is)...

  0000 0000 . 1000 0000
+ 0000 0000 . 1000 0000
-----------------------
  0000 0001 . 0000 0000

...or 0x0080 + 0x0080 = 0x0100.

There is an implied / imaginary divided-by-256 always present in our value. So 0x0080 + 0x0080 = 0x0100 can also be viewed as 128 (/256) + 128 (/256) = 256 (/256).

That's essentially what I'm doing in my fixed-point EWMA code. I pretend there is a decimal point to the left of the right-most eight bits (just like the example above). In order to convert from a "normal" integer (like the value returned from analogRead) I have to shift the value left by eight bits so the decimal points are aligned. That's the purpose of multiplying by 256; to shift the whole number into the whole number position.
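
Here is a tiny sketch of that pretend-decimal-point idea (the helper names are mine, not from the code above):

unsigned long toFixed( unsigned long whole ) { return whole * 256; }          // shift into the whole-number position
unsigned long toWhole( unsigned long fixed ) { return (fixed + 128) / 256; }  // back to a whole number, rounded

void setup( void )
{
  Serial.begin( 250000 );
  // 0x0080 is 0.5 in dot-8 notation; adding two halves gives 0x0100, which is 1.0
  Serial.println( toWhole( 0x0080 + 0x0080 ) );   // prints 1
}

void loop( void ) { }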

Does that help?

From looking at your revision to the model spreadsheet, I noticed that making that change degrades how accurately the output of the filter reflects the input (when gain/alpha is set to 1).

I should have mentioned, the alpha for OUT 5 is permanently set at 0.25. No attempt is made to use the value in cell B3.

Just to make certain I understand: You're saying that OUT 1 and OUT 5 are a bit different even though they both have an alpha of 0.25. Correct?

I assume this reflects the reduced precision?

Yes. Excel uses IEEE 64-bit floats, which give roughly 15 significant digits (11 or so decimals for values in this range). My code always has about 2.4 decimals (log10(256)). The two values will frequently be a bit different, but they should never diverge for any extended period and should never be more than 0.005 apart.

Whether or not this is acceptable of course depends on the application and the range of data.

Exactly. Two more things to consider...

It may not be worth using the fixed-point version because of conversions. In your case, you have the motor speed with two decimals. In order to use the fixed-point version you have to multiply the motor speed by 100 and round to an integer. If you display the smoothed value, you will have to convert the fixed-point value to text. After all that, the floating-point version may actually be more efficient.

The fixed-point version works well when alpha is predetermined and unlikely to change. In your case, you will probably want to experiment with different alpha values which makes the fixed-point version annoying and difficult to use.
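
For comparison, a rough sketch of the floating-point version with a runtime-adjustable alpha (variable names are mine, and analogRead stands in for whatever the sample source is):

float alpha = 0.25;   // easy to change while experimenting
float smoothed;

void setup( void )
{
  Serial.begin( 250000 );
  smoothed = analogRead( 0 );   // seed with the first reading
}

void loop( void )
{
  float sample = analogRead( 0 );
  smoothed = (alpha * sample) + ((1.0 - alpha) * smoothed);   // the basic formula from above
  Serial.println( smoothed );
  delay( 100 );
}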