I am trying to get the average value of a digital signal with varying frequency and pulse width. It's a simple low/high signal, usually well under 300 Hz, that I am sampling 3000 times a second, and I want an average value from 0 to 100.
The following works just fine, but I am wondering if I can improve it without too much trouble. (I thought 3000 samples a second is getting into the range where it might be a good idea not to waste processor cycles.)
// setup
float Average = 0;

// loop, called at 3000 Hz
// digitalRead() returns 0 or 1, so the input term is 0 or 100;
// each sample moves Average 1/128 of the way toward it.
Average = Average + ((100 * digitalRead(InputPin) - Average) / 128);
The divisor of 128 works well at 3000 Hz, showing me just the last converging change when I print the value once a second. In normal use I would probably use 64 or 32 for half- or quarter-second convergence.
I attempted to use a right shift instead of a division (as explained if I search for "exponential moving average"), but I just could not figure out how to do that correctly. Can anyone give me a hint on how to right-shift this example correctly?
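For reference, here is the fixed-point version of that right-shift trick as I understand it: the average is kept scaled up by the same power of two (128 here), so the shifted-off bits aren't lost on every update. The function and variable names are my own, and the 0/1 sample stands in for the digitalRead() result so the arithmetic can be shown on its own:

```cpp
#include <cstdint>

// The average, stored scaled up by 128 (i.e. Average * 128 in fixed point).
int32_t avgScaled = 0;

// One update step at the 3000 Hz sample rate; 'sample' is 0 or 1,
// the value digitalRead(InputPin) would return.
void emaUpdate(int sample) {
    int32_t input = 100 * sample;           // scale the bit to 0..100
    // Equivalent to Average += (input - Average) / 128, done in fixed point:
    avgScaled += input - (avgScaled >> 7);  // >> 7 is the divide by 128
}

// Read the average back on the 0..100 scale.
int emaValue() {
    return avgScaled >> 7;
}
```

Changing the shift count (6 for 64, 5 for 32) changes the convergence time exactly as changing the divisor does in the float version, as long as the same count is used in both shifts.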
I also wonder whether the compiler is smart enough to recognize that a division by 128 can be done with a binary shift, and do that automatically behind the scenes without any effort on my part?
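My understanding (worth double-checking for a specific compiler) is: for unsigned integers, division by 128 does compile to a single shift; for signed integers the compiler still avoids a real division but has to add a fix-up, because C++ division truncates toward zero while an arithmetic shift rounds toward negative infinity; and a float division by 128 can become a multiply since 1/128 is exactly representable. This little check shows why the signed case is not just `>> 7`:

```cpp
#include <cassert>

// Demonstrates the difference between signed division by 128 and ">> 7":
// division truncates toward zero, while an arithmetic right shift
// rounds toward negative infinity.
void shiftVsDivide() {
    unsigned u = 1000;
    assert(u / 128 == (u >> 7));  // unsigned: identical to a logical shift

    int n = -100;
    assert(n / 128 == 0);         // signed division truncates toward zero
    assert((n >> 7) == -1);       // arithmetic shift on two's-complement targets
}
```

Since the average here never goes negative, keeping the accumulator unsigned (or accepting the shift's rounding) sidesteps the whole issue.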
A completely different way to obtain an average would be to capture the microsecond timestamps of the rising and falling edges and calculate the average directly, instead of sampling at 3000 Hz. Does anyone think that would be more efficient? (It's a bit more complicated, but performed more accurately and less often.)
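The math for that approach would just be the duty cycle of one full period: time high divided by period, scaled to 0..100. As a sketch, with the three timestamps passed in as plain parameters (on an Arduino they would come from micros() inside a pin-change interrupt handler; the function name is mine):

```cpp
#include <cstdint>

// Duty cycle (0..100) from the timestamps of one full period:
// a rising edge, the following falling edge, and the next rising edge.
// Timestamps are in microseconds, as micros() would report them.
int dutyFromEdges(uint32_t rise, uint32_t fall, uint32_t nextRise) {
    uint32_t highTime = fall - rise;      // unsigned subtraction also survives rollover
    uint32_t period   = nextRise - rise;
    if (period == 0) return 0;            // guard against a degenerate capture
    return (int)((100u * highTime) / period);
}
```

One result per signal period instead of 3000 samples a second, though it needs extra handling for the stuck-low (no edges, output 0) and stuck-high (no edges, output 100) cases, which the sampling version gets for free.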