# exponential moving average again

I am trying to get the average value of a digital signal with varying frequency and pulse width. It's a simple low-or-high signal, usually much less than 300 Hz, that I am sampling 3000 times a second, and I want an average value from 0 to 100.

The following works just fine, but I am wondering if I can improve it without too much trouble. (I thought 3000 times a second is getting into the range where it might be a good idea not to waste processor cycles.)

//setup
float Average = 0;

// loop @3000hz
Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);

The divisor of 128 works well at 3000hz showing me just the last converging change when I print it once a second. In normal use I would probably use 64 or 32 for 1/2 or 1/4 second convergence.

I attempted to use a Right Shift instead of a division (as explained if I search for "exponential moving average") but I just could not figure out how to do that correctly. Can anyone give me a hint how to Right Shift this example correctly?

I also wonder if the compiler is smart enough to recognize that a division by 128 can be done with a binary shift and do that automatically behind the scenes without any effort on my part?

A completely different way to obtain an average would be to capture the microsecond of the rising and falling edge and calculate the average directly instead of sampling at 3000hz. Does anyone think that would be more efficient? (a bit more complicated but performed more accurately and less often)

Sorry, I didn't see that a very similar question is being thrashed out today. However I still can't seem to get the Right Shift correct.

edmcguirk:
//setup
float Average = 0;

// loop @3000hz
Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);

You cannot use bitshifting to do division on float variables

(you could divide an integer by 128 using anInteger >> 7, but the compiler would do that for you anyway).

Since division takes longer than multiplication, you can arrange to always multiply.

This may be of interest:

``````// Exponential Moving Average filter...
#define FILTER_TIME_CONST   100.0   // time constant: time in samples to reach ~63% of a steady value
                                    // the rise time of a signal to > 90% is approx 2.5 times this value
                                    // at 3 kHz sampling, a value of 100 == rise time to 95% of approx 100 ms

#define FILTER_WEIGHT (FILTER_TIME_CONST / (FILTER_TIME_CONST + 1.0))

Average = (FILTER_WEIGHT * Average) + (1.0 - FILTER_WEIGHT) * (float)newVal;
``````

Note that FILTER_WEIGHT and (1.0-FILTER_WEIGHT) are constants, so they are pre-computed by the compiler.

If you run a simple test loop setting newVal to 0 and 100 alternately, with the value of 100 for the time constant the Average value will smooth to > 47 in 300 samples and > 49 in 450 samples. see: Exponential_smoothing

Yours,
TonyWilk

``````Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);
``````

This is not a "moving average", exponential or otherwise. There are many threads on this topic elsewhere in the forum, showing the correct approaches. This one is current.

jremington:

``````Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);
``````

This is not a "moving average", exponential or otherwise. There are many threads on this topic elsewhere in the forum, showing the correct approaches. This one is current.

It may not be the equation that you are familiar with but it does converge exponentially to the average. I did read that current thread you referenced after I started this thread but I just can't get their examples to work for me yet.

@TonyWilk

I know it won't work on Float. I have not been able to do the same thing with shifted integers, long integers, or unsigned long integers yet. Either it errors out on compile or it converges to the wrong values.

What I have shown does work fine as is. I was just trying to find something more efficient even if I don't yet need to be more efficient. Your example effectively changes dividing by a constant to multiplying by the inverse of a constant. I suppose that will be a little more efficient.

Some of the examples in those other threads won't compile in 1.0.6 and unfortunately 1.8.5 seems to have a little problem uploading sketches at baud rates faster than 1200. I didn't realize this when I set my sketch at 9600 baud and I could not upload from 1.8.5. I downgraded back to 1.0.6 and succeeded at 2400 baud but I didn't realize that 1.8.5 likes 1200. So now I have to downgrade back to 1.0.6 again to fix my sketch to 1200 baud. Or maybe I'll just leave it at 1.0.6.

It may not be the equation that you are familiar with

It is not by any stretch of the imagination a moving average. The correct terms to describe that operation would be an "Infinite Impulse Response (IIR) Low Pass Filter", with an uncharacterized time constant. Why confuse the issue?

edmcguirk:
I know it won't work on Float. I have not been able to do the same thing with shifted integers, long integers, or unsigned long integers yet. Either it error's out on compile or it converges to the wrong values.

Ah, apologies... hadn't realised you wanted integer math to calc it faster.

Your example effectively changes dividing by a constant to multiplying by the inverse of a constant. I suppose that will be a little more efficient.

Yes, it is a bit more efficient. Not a lot, but every bit helps

I think that an important point about that method is that it avoids "magic numbers"
It is based on a Time Constant that you can pre-determine based on your sampling rate and the settling time you require. There are too many 'smoothing routines' out there which mean you have to run the code and tweak a random value until you get some response you like.

In the other thread that was just mentioned, it looks like PeterPan321 has the right idea.

Yours,
TonyWilk

A moving average is normally a FIR low-pass filter (finite impulse response) with N coefficients of 1/N,
which can be calculated efficiently with a ring buffer. It has no phase distortion.

A good way to formulate a simple one-pole IIR low-pass filter is:

``````  value += alpha * (input - value) ;
``````

Where alpha is a constant between 0.0 and 1.0 (exclusive), smaller alpha makes the filter slower to respond,
and value is the state that tracks the changing input.

The inherent phase distortion and infinite tail of the IIR filter make it unsuitable for some purposes
(image processing, for example), but it's efficient and takes only one state variable to implement rather
than the buffer array required for FIR.

jremington:
It is not by any stretch of the imagination a moving average. The correct terms to describe that operation would be an "Infinite Impulse Response (IIR) Low Pass Filter", with an uncharacterized time constant. Why confuse the issue?

Ok, I was unaware that moving average had a formal definition. I knew it was an IIR and thought that was equivalent.

MarkT:
A moving average is normally a FIR low-pass filter (finite impulse response) with N coefficients of 1/N,
which can be calculated efficiently with a ring buffer. It has no phase distortion.

A good way to formulate a simple one-pole IIR low-pass filter is:

``````  value += alpha * (input - value) ;
``````

Where alpha is a constant between 0.0 and 1.0 (exclusive), smaller alpha makes the filter slower to respond,
and value is the state that tracks the changing input.

The inherent phase distortion and infinite tail of the IIR filter make it unsuitable for some purposes
(image processing, for example), but it's efficient and takes only one state variable to implement rather
than the buffer array required for FIR.

Yes, that's what I am using.

Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);

The 100 term just converts the 1 or zero value to 100 or 0. Alpha is 1/128. Phase distortion is not important in my case.

TonyWilk:
Ah, apologies... hadn't realised you wanted integer math to calc it faster.
Yes, it is a bit more efficient. Not a lot, but every bit helps

I think that an important point about that method is that it avoids "magic numbers"
It is based on a Time Constant that you can pre-determine based on your sampling rate and the settling time you require. There are too many 'smoothing routines' out there which mean you have to run the code and tweak a random value until you get some response you like.

In the other thread that was just mentioned, it looks like PeterPan321 has the right idea.

Yours,
TonyWilk

I could figure out the time constant based on 3000 samples per second and an alpha of 1/128, but it's quicker to just bump the divisor up or down by powers of 2 and see how long it takes to settle. I don't know if the compiler will make any special optimization when it compiles a division by a power of 2, but unless I convert to binary shifting, the choice of value for alpha will be arbitrary for me anyway.

Apparently PeterPan321's original example had an error and it does not converge properly when I copy it. I haven't been able to understand yet where he corrected it.

edmcguirk:
I am trying to get the average value of a digital signal with varying frequency and pulse width. It's a simple low or high signal usually much less than 300hz that I am sampling at 3000 times a second and I want an average value from 0 to 100.

....

Does anyone think that would be more efficient?

If you just want to get the average value, why does it need to be exponential average?
The normal average is faster to calculate (e.g. you do it every 300 samples again)

``````int average(int n)
{
  int sum = n / 2;                  // for rounding
  for (int i = 0; i < n; i++)
  {
    sum += digitalRead(InputPin);   // accumulate the 0/1 samples
  }
  return sum / (n / 100);           // scale the result to 0..100
}
``````

A more running-average kind of way can also be done in pure integer math:
just mimic float by keeping all values * 100 (using a power of 2 might even be faster).

``````uint32_t avg = 0;

uint32_t test_int2(int n)
{
  for (int i = 0; i < n; i++)
  {
    avg *= 99;
    if (digitalRead(InputPin)) avg += 10000;  // new sample, scaled * 100
    avg /= 100;
  }
  return avg;
}
``````

avg will be between 0 and 10000

Update: using powers of 2 is even faster, but the code gets awkward (magic numbers appear)

``````avg = avg * 128 - avg;                    //  ==>   * 127  one shift + subtract
if (digitalRead(InputPin)) avg += 7813;   // 10000 *  100/128
avg /= 128;                               // fast shift instead of divide
``````

robtillaart:
A more running average kind of way can also be done pure in integer math.

Your code there got me thinking... how much slower is my float example from earlier ?

I got my answer (and still worry I've done something daft - but division is division!)

Then messed with your powers-of-2 thing.

The results (running on an Arduino Uno) are:

``````Timing Test: int, powers_of_2 and float exponential filters
to filter 500 samples of a value stepped 0 to 100 with a time constant of 128 samples
The exponential should be: (1-1/e^(500/128))*100 = 97.98
(Times in uSecs are per iteration)

f_int= 98.14  uSecs= 41.54          <-- integer
f_p2= 97.96  uSecs= 6.79            <-- powers-of-2
f_float= 97.96  uSecs= 17.58        <-- float
``````

The code is below.
I attempted to make the 3 methods 'compatible' and chose TC=128 so the results are comparable.

``````#define DBG( var ) do{Serial.print(#var);Serial.print("= ");Serial.print(var);Serial.print("  ");}while(0)
#define CRLF() Serial.println()

// Exponential Moving Average filter...
#define FILTER_TIME_CONST   128.0       // time constant: time in samples to reach ~ 63% of a steady value
// the rise time of a signal to > 90% is approx 2.5 times this value
// at 3kHz sampling a value of 100 == rise time to 95% of approx 100mS

#define FILTER_WEIGHT (FILTER_TIME_CONST/(FILTER_TIME_CONST +1.0))

// using integer
// - numbers are *1000 for "3 fixed decimals"
float test_int2( int n )
{
uint32_t avg = 0;
uint32_t val= 100;
uint32_t decshift= 1000;
uint32_t weight = round(FILTER_WEIGHT * (float)decshift);
uint32_t weight2= round((1-FILTER_WEIGHT) * (float)decshift);

for (int i = 0; i < n; i++)
{
avg = (weight * avg ) + (weight2 * (val * decshift));
avg= avg / decshift;
//val= (val == 0) ? 100 : 0; // simulate toggling input
}
return (float)avg / (float)decshift;
}

// using powers of 2
// - numbers are * 1024 for "sort of 3 fixed decimals" :)
// - filter is limited to approx time constants of powers of 2
float test_p2( int n )
{
uint32_t avg = 0;
uint32_t val= 100;
byte decShift= 10;        // shift for "number of decimal places", 10 = *1024
byte weightShift= 7;      // approx. time constant=128,  bitshift= 7

for (int i = 0; i < n; i++)
{
avg= (avg << weightShift) - avg;  // *127
avg += (val << decShift);         // +1 (*decimal shift)
avg = avg >> weightShift;         // /128
//val= (val == 0) ? 100 : 0; // simulate toggling input
}
return (float)avg / (float)( 1 << decShift);
}

// using float
float test_float2( int n )
{
float favg = 0.0;
int val= 100;
for( int i = 0; i < n; i++)
{
favg = (FILTER_WEIGHT * favg ) + (1.0-FILTER_WEIGHT) * (float)val ;
//val= (val == 0) ? 100 : 0; // simulate toggling input
}
return favg;
}

void setup() {
Serial.begin(9600);
Serial.println("Timing Test: int, powers_of_2 and float exponential filters");
Serial.println("to filter 500 samples of a value stepped 0 to 100 with a time constant of 128 samples");
Serial.println("The exponential should be: (1-1/e^(500/128))*100 = 97.98");
Serial.println("(Times in uSecs are per iteration)\n");
}

void loop()
{
float f_int, f_p2, f_float, uSecs;
uint32_t t;
int nSamples= 500;

t= micros();
f_int= test_int2( nSamples );
uSecs= (float)(micros() - t)/(float)nSamples;
DBG(f_int); DBG(uSecs); CRLF();

t= micros();
f_p2= test_p2( nSamples );
uSecs= (float)(micros() - t)/(float)nSamples;
DBG(f_p2); DBG(uSecs); CRLF();

t= micros();
f_float= test_float2( nSamples );
uSecs= (float)(micros() - t)/(float)nSamples;
DBG(f_float); DBG(uSecs); CRLF();

CRLF();
delay(5000);
}
``````

Yours,
TonyWilk

robtillaart:
If you just want to get the average value, why does it need to be exponential average?
The normal average is faster to calculate (e.g. you do it every 300 samples again)

``````int average(n)
{
  int sum = n / 2; // for rounding
  for (int i = 0; i < n; i++)
  {
  }
  return avg = sum/(n/100);
}
``````

My thinking may be wrong, but the signal I am trying to measure might go as high as 300 Hz, and if I want 10% accuracy on the pulse width I'll have to sample at 3000 Hz (although I guess technically a Nyquist frequency of 600 Hz is sufficient, especially with a digital filter cutoff in the neighborhood of several tenths of a second). However, the signal might also go as low as 10 Hz. I can't stop and wait for an average across a tenth of a second; there are other things I am doing on the 100th of a second. I could merge the collecting of samples for an average into the body of the rest of the program, but in my mind it's just easier to have a quick one-liner that samples constantly. (Still setting aside my other idea of using an external interrupt to collect the timestamps of the rising and falling edges instead of sampling.)

Mimicking a low pass filter is closest to what I am trying to do.

Again, what I have works. Looking at the equation in my other post, I noticed that it's pretty dumb to repeatedly perform a multiplication to create a value that I already know will only be 0 or 100.

Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);

should be:

If signal is high
Average += (100 - Average) * (const 1/128)
else
Average -= Average * (const 1/128)

I don't yet know that I need it to be any faster but it annoys me that I can't quite get binary shifting to work. I am not in front of my arduino right now so I can't play around with it but I suspect my problem may be related to the size of my alpha in relation to the average value plus truncation errors. I kept getting results where the DC measurements were converging to 100 - 7 when alpha was 1/8 and 100 - 15 when alpha was 1/16 (if I remember correctly).

(... some more thought here...)

Now it should be obvious to an idiot (me) that the following should work?

Unsigned Long Int AvgShifted = 0;
Unsigned Long Int Avg = 0;
Int ShiftAmount = 7; // 7 for 128, 6 for 64

AvgShifted = Avg << ShiftAmount;

If signal is high
AvgShifted += 100 - Avg;
else
AvgShifted -= Avg;

Avg = AvgShifted >> ShiftAmount;

I will try to fix my IDE problems later today so I can experiment and copy some examples.

@robtillaart
I didn't read your comment till now but I like the idea of eliminating the "else" and reusing the Avg variable.

TonyWilk:
Your code there got me thinking... how much slower is my float example from earlier ?

I got my answer (and still worry I've done something daft - but division is division!)

Then messed with your powers-of-2 thing.

The results (running on an Arduino Uno) are:

``````Timing Test: int, powers_of_2 and float exponential filters
to filter 500 samples of a value stepped 0 to 100 with a time constant of 128 samples
The exponential should be: (1-1/e^(500/128))*100 = 97.98
(Times in uSecs are per iteration)

f_int= 98.14  uSecs= 41.54          <-- integer
f_p2= 97.96  uSecs= 6.79            <-- powers-of-2
f_float= 97.96  uSecs= 17.58        <-- float
``````
A factor of 2.59 faster than the float version, not bad at all.

Have you also measured the timing of the original code of the OP?

robtillaart:
Have you also measured the timing of the original code of the OP?

No.
Wait a mo.... yes:

``````Timing Test: int, powers_of_2, float and Original OP's exponential filters
to filter 500 samples of a value stepped 0 to 100 with a time constant of 128 samples
The exponential should be: (1-1/e^(500/128))*100 = 97.98
(Times in uSecs are per iteration)

f_int= 98.14  uSecs= 41.54
f_p2= 97.96  uSecs= 6.78
f_float= 97.96  uSecs= 17.59
f_origOP= 98.02  uSecs= 24.44        <-- original OP's algorithm
f_origOPnoDiv= 98.02  uSecs= 24.44   <-- original OP's algorithm with div replaced by mult 1/n
``````

Code is:

``````#define DBG( var ) do{Serial.print(#var);Serial.print("= ");Serial.print(var);Serial.print("  ");}while(0)
#define CRLF() Serial.println()

// Exponential Moving Average filter...
#define FILTER_TIME_CONST   128.0       // time constant: time in samples to reach ~ 63% of a steady value
// the rise time of a signal to > 90% is approx 2.5 times this value
// at 3kHz sampling a value of 100 == rise time to 95% of approx 100mS

#define FILTER_WEIGHT (FILTER_TIME_CONST/(FILTER_TIME_CONST +1.0))

// using integer
// - numbers are *1000 for "3 fixed decimals"
float test_int2( int n )
{
uint32_t avg = 0;
uint32_t val= 100;
uint32_t decshift= 1000;
uint32_t weight = round(FILTER_WEIGHT * (float)decshift);
uint32_t weight2= round((1-FILTER_WEIGHT) * (float)decshift);

for (int i = 0; i < n; i++)
{
avg = (weight * avg ) + (weight2 * (val * decshift));
avg= avg / decshift;
//val= (val == 0) ? 100 : 0; // simulate toggling input
}
return (float)avg / (float)decshift;
}

// using powers of 2
// - numbers are * 1024 for "sort of 3 fixed decimals" :)
// - filter is limited to approx time constants of powers of 2
float test_p2( int n )
{
uint32_t avg = 0;
uint32_t val= 100;
byte decShift= 10;        // shift for "number of decimal places", 10 = *1024
byte weightShift= 7;      // approx. time constant=128,  bitshift= 7

for (int i = 0; i < n; i++)
{
avg= (avg << weightShift) - avg;  // *127
avg += (val << decShift);         // +1 (*decimal shift)
avg = avg >> weightShift;         // /128
//val= (val == 0) ? 100 : 0; // simulate toggling input
}
return (float)avg / (float)( 1 << decShift);
}

// using float
float test_float2( int n )
{
float favg = 0.0;
int val= 100;
for( int i = 0; i < n; i++)
{
favg = (FILTER_WEIGHT * favg ) + (1.0-FILTER_WEIGHT) * (float)val ;
//val= (val == 0) ? 100 : 0; // simulate toggling input
}
return favg;
}

// Test the OP's (edmcguirk) original algorithm
//
float test_OrigOP( int n )
{
float Average= 0.0;
int val= 100;
for( int i=0; i<n; i++ )
{
// Original posted algorithm:
// Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);
Average = Average + (((float)val - Average) / 128);
}
return Average;
}

// Test the OP's (edmcguirk) original algorithm without division
//
float test_OrigOPnoDiv( int n )
{
float Average= 0.0;
float noDiv= 1.0 / 128.0;
int val= 100;
for( int i=0; i<n; i++ )
{
// Original posted algorithm:
// Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);
Average = Average + (((float)val - Average) * noDiv);
}
return Average;
}

void setup() {
Serial.begin(9600);
Serial.println("Timing Test: int, powers_of_2, float and Original OP's exponential filters");
Serial.println("to filter 500 samples of a value stepped 0 to 100 with a time constant of 128 samples");
Serial.println("The exponential should be: (1-1/e^(500/128))*100 = 97.98");
Serial.println("(Times in uSecs are per iteration)\n");
}

// Test timing of filter functions
// - use Serial Monitor to view thee results
//
void TestTiming()
{
float f_int, f_p2, f_float, f_origOP, f_origOPnoDiv, uSecs;
uint32_t t;
int nSamples= 500;

t= micros();
f_int= test_int2( nSamples );
uSecs= (float)(micros() - t)/(float)nSamples;
DBG(f_int); DBG(uSecs); CRLF();

t= micros();
f_p2= test_p2( nSamples );
uSecs= (float)(micros() - t)/(float)nSamples;
DBG(f_p2); DBG(uSecs); CRLF();

t= micros();
f_float= test_float2( nSamples );
uSecs= (float)(micros() - t)/(float)nSamples;
DBG(f_float); DBG(uSecs); CRLF();

t= micros();
f_origOP= test_OrigOP( nSamples );
uSecs= (float)(micros() - t)/(float)nSamples;
DBG(f_origOP); DBG(uSecs); CRLF();

t= micros();
f_origOPnoDiv= test_OrigOPnoDiv( nSamples );
uSecs= (float)(micros() - t)/(float)nSamples;
DBG(f_origOPnoDiv); DBG(uSecs); CRLF();

CRLF();
delay(5000);
}

void TestResponse()
{
// integer:
uint32_t avg = 0;
uint32_t val= 100;
uint32_t decshift= 1000;
uint32_t weight = round(FILTER_WEIGHT * (float)decshift);
uint32_t weight2= round((1-FILTER_WEIGHT) * (float)decshift);

// powers of 2
uint32_t avg2 = 0;
byte decShift= 10;        // shift for "number of decimal places", 10 = *1024
byte weightShift= 7;      // approx. time constant=128,  bitshift= 7

// float:
float favg = 0.0;

// OP original:
float Average= 0.0;

// OP original, no division:
float Average2= 0.0;
float noDiv= 1.0 / 128.0;

for( int i=0; i<500; i++ )
{
// integer:
avg = (weight * avg ) + (weight2 * (val * decshift));
avg= avg / decshift;
//val= (val == 0) ? 100 : 0; // simulate toggling input

// powers of 2:
avg2= (avg2 << weightShift) - avg2;  // *127
avg2 += (val << decShift);         // +1 (*decimal shift)
avg2 = avg2 >> weightShift;         // /128

// float:
favg = (FILTER_WEIGHT * favg ) + (1.0-FILTER_WEIGHT) * (float)val ;

// OP original:
// Original posted algorithm:
// Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);
Average = Average + (((float)val - Average) / 128);

// OP original - without division !
Average2 = Average2 + (((float)val - Average2) * noDiv);

//val= (val == 0) ? 100 : 0; // simulate toggling input

Serial.print((float)avg / (float)decshift); Serial.print(',');
Serial.print((float)avg2 / (float)( 1 << decShift)); Serial.print(',');
Serial.print(favg); Serial.print(',');
Serial.print(Average); Serial.print(',');
Serial.print(Average2); Serial.println();
}
}

// **NOTE**
// - comment in/out only one of the functions and use appropriate Serial monitoring.
//
void loop()
{
// Use Serial Monitor to view result from this test function:
TestTiming();

// User Serial Plotter to view results from this function:
//  TestResponse();
}
``````

The code has the option to comment out the TestTiming() function and un-comment TestResponse(), which, when viewed in the Serial Plotter, shows there is little difference between any of them.

Seeing the time for the original which did a division, I also tried a version which replaced the divide with a multiply... what I can't figure out is why the time for div and mult versions are exactly the same. (*)

Yours,
TonyWilk

(*) maybe because I've just burned my brain out at the pub quiz, and I was thirsty.

Actually it's worse than that because I was multiplying by 100 in every iteration.

I finally got it to work properly. I was not properly copying the good advice given in these two threads. This is what I am now using:

If signal is high
ShiftedAVG += 100 - Average;
else
ShiftedAVG -= Average;

Average = (ShiftedAVG >> ShiftAmount);

I decided not to reuse the Average variable, and to drop the initialization of the shifted average on every iteration:

ShiftedAVG = Average << ShiftAmount;

because that would introduce truncation two lines later by losing the rightmost bits of the accumulated shifted average between cycles.

The generalized version should be:

ShiftedAVG += NewValue - Average;

Average = (ShiftedAVG >> ShiftAmount);

Not sure if I need to include the rounding compensator from PieterP's EMA class here.

Anyway, now that I have the average filter figured out, what do you-all think about capturing the rising and falling edge of this digital signal and adding up the microseconds the signal is high over the course of a standard time window (probably 1/2 or 1/4 second) as opposed to sampling the digital pin 3000 times a second.

I'm not yet familiar with using an external interrupt but it seems to me that I could get more accuracy with fewer executed lines of code. I suppose I could even accumulate a running average every interrupt if I also change my Alpha to be based on microseconds instead of 3000 samples per second. Even though the actual calculations would be running asynchronously it would still converge exponentially but the slope would change every time the input signal changes state instead of 3000 times per second.

Think it's worth the effort?

edmcguirk:
Anyway, now that I have the average filter figured out, what do you-all think about capturing the rising and falling edge of this digital signal and adding up the microseconds the signal is high over the course of a standard time window (probably 1/2 or 1/4 second) as opposed to sampling the digital pin 3000 times a second.

I'm not yet familiar with using an external interrupt but it seems to me that I could get more accuracy with fewer executed lines of code. I suppose I could even accumulate a running average every interrupt if I also change my Alpha to be based on microseconds instead of 3000 samples per second. Even though the actual calculations would be running asynchronously it would still converge exponentially but the slope would change every time the input signal changes state instead of 3000 times per second.

Think it's worth the effort?

It is worth the effort for five reasons:

1. it is indeed more accurate
2. you learn to use external interrupts,
3. you learn the word volatile
4. you learn that micros() counts in steps of 4 uSec.
5. you learn cli() and sei() - noInterrupts() and interrupts()

Here's a starter for determining the duty cycle. It measures a total time and the time the signal is HIGH;
from those two the % HIGH can be calculated.

Home work is

1. add the measurement of the frequency

2. add a restart of the measurement

3. write a new sketch that measures 1 HIGH and 1 LOW period
and indicates with a flag that the measurement is ready.

4. change the new sketch so you can make a new measurement when needed e.g. with a flag

``````//    FILE: dutyCycle.ino
//  AUTHOR: Rob Tillaart
// VERSION: 0.0.1
// PURPOSE: demo
//    DATE: 2018-02-26

volatile uint32_t durationHIGH = 0;
volatile uint32_t startTime = 0;

void setup()
{
  Serial.begin(115200);
  Serial.println(__FILE__);
  pinMode(2, INPUT);

  attachInterrupt(0, measure, CHANGE);
  // ISR 0 == pin 2;  measure is the ISR function below; CHANGE is the trigger
}

void loop()
{
  noInterrupts();
  uint32_t highTime  = durationHIGH;
  uint32_t totalTime = micros() - startTime;
  interrupts();

  float ratio = (float)highTime / (float)totalTime;
  Serial.print("ratio:\t");
  Serial.println(ratio);
  delay(100);
}

void measure()
{
  static uint32_t startHigh = 0;
  uint32_t now = micros();

  if (digitalRead(2) == LOW)   // signal just went LOW: a HIGH period ended
  {
    durationHIGH += (now - startHigh);
  }
  else
  {
    if (startTime == 0)  // reset only when signal goes HIGH
    {
      startTime = now;
      durationHIGH = 0;
    }
    startHigh = now;
  }
}
``````