I'm looking to capture a digital pulse (read via an IR phototransistor on an analog input) and filter out the noise while automatically calculating and continually adjusting the trigger point. The pulse is about 30 ms long, surrounded by long periods (600 to 160,000+ ms) of silence. My current sketch, which also sends data out the serial port and updates an LCD display every 10 samples, seems to capture between 2 and 11 ADC samples during each high pulse. The variable I actually care about is the interval between pulses.
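Here's a stripped-down version of the capture loop (SENSOR_PIN and the hard-coded TRIGGER value are illustrative placeholders, not my exact sketch; that fixed trigger is the thing I'm trying to replace with an adaptive level):

const int SENSOR_PIN = A0;
const int TRIGGER = 100;           // hard-coded trigger I want to eliminate

unsigned long lastPulseMillis = 0;
bool wasHigh = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sample = analogRead(SENSOR_PIN);
  bool isHigh = (sample > TRIGGER);

  // Rising edge: report the interval since the previous pulse.
  if (isHigh && !wasHigh) {
    unsigned long now = millis();
    Serial.println(now - lastPulseMillis);   // the interval I care about
    lastPulseMillis = now;
  }
  wasHigh = isHigh;
}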
In my current sketch, I started out reading the analog value and storing it in a circular buffer of 25 ints, which I sort (on a copy) to find the median, and average to find the mean. My noise floor currently varies from 0 ADC counts at night to about 50 over the last day and a half, but I'd like to avoid having to get in and tweak the sketch in the summer just because the noise crept above a hard-coded trigger value. My peak ADC value during the high period has reached 800-900, but averages more like 300-400 ADC counts across the samples captured during a pulse.

A friend suggested I run an FIR filter or an FFT on the data, but those seem geared towards a consistently repeating frequency rather than a pulse with a random delay between occurrences. Would it be to my benefit to use one of those filters, or would I be fine just calculating a standard deviation on the samples collected? Is my buffer large enough, or should I double it?
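For reference, this is roughly the shape of my stats code, with the standard-deviation step I'm considering bolted on (names and the insertion sort are illustrative, not my exact sketch):

const int BUF_SIZE = 25;
int buf[BUF_SIZE];
int bufIndex = 0;

void addSample(int sample) {
  buf[bufIndex] = sample;
  bufIndex = (bufIndex + 1) % BUF_SIZE;   // circular buffer
}

float bufMean() {
  long sum = 0;
  for (int i = 0; i < BUF_SIZE; i++) sum += buf[i];
  return (float)sum / BUF_SIZE;
}

int bufMedian() {
  // Sort a copy so the circular buffer keeps its time order.
  int sorted[BUF_SIZE];
  memcpy(sorted, buf, sizeof(buf));
  for (int i = 1; i < BUF_SIZE; i++) {    // insertion sort: fine for 25 ints
    int key = sorted[i];
    int j = i - 1;
    while (j >= 0 && sorted[j] > key) { sorted[j + 1] = sorted[j]; j--; }
    sorted[j + 1] = key;
  }
  return sorted[BUF_SIZE / 2];
}

float bufStdDev() {
  float mean = bufMean();
  float sumSq = 0;
  for (int i = 0; i < BUF_SIZE; i++) {
    float d = buf[i] - mean;
    sumSq += d * d;
  }
  return sqrt(sumSq / BUF_SIZE);          // population standard deviation
}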
Once I have that information, my thought was to normalize the noise down to zero, scale the peaks to max, and then pick a point somewhere in the upper middle as the transition level, hopefully maintaining a healthy response until the noise completely drowns out the signal.
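Something like this is what I have in mind, where the 0.6 fraction, the moving-average weights, and the decay and minimum-gap constants are guesses I'd expect to tune, not tested values:

float noiseFloor = 0;     // tracks the quiet-period level
float peakLevel  = 300;   // tracks recent pulse heights

int adaptiveTrigger() {
  // Trigger sits partway between the noise floor and the peak estimate.
  return (int)(noiseFloor + 0.6 * (peakLevel - noiseFloor));
}

void updateLevels(int sample) {
  if (sample > adaptiveTrigger()) {
    // During a pulse: let the peak estimate follow the signal up quickly.
    if (sample > peakLevel) peakLevel = sample;
  } else {
    // During silence: slow exponential moving average of the noise.
    noiseFloor = 0.99 * noiseFloor + 0.01 * sample;
  }
  // Let the peak decay slowly so the trigger re-adapts if pulses shrink,
  // but keep a minimum gap above the noise so the trigger never collapses.
  peakLevel *= 0.9999;
  if (peakLevel < noiseFloor + 50) peakLevel = noiseFloor + 50;
}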