Hi,
I’ve been using timer interrupts so that I can use the maximum sample frequency for every string, and I’ve combined that with continuous sampling on A0. The OCR value is calculated with the following formula:
OCR1A = (time between interrupts / 0.0000000625) – 1
The time between the interrupts is calculated from the maximum period I expect, which corresponds to a frequency about 10 Hz (this is the correction) below the fundamental. This period is multiplied by 2 to get the desired window for the autocorrelation, and then divided by the number of samples. See below:
OCR value = 2 * (1 / (string – correction)) / arraySize / 0.0000000625 – 1
The sample frequency is then recalculated from this (integer) OCR value, to eliminate rounding errors.
While testing I noticed that the resolution is better when I change the 2 to 3 or 4. So instead of using 2 periods I’m using 3 or 4, and therefore lowering the sample frequency. The tuner is also quite unstable when using the formula above, whereas the results with the increased number of periods are reasonable. However, I’d like to know what is wrong with my calculations and how I can improve them. I’ve attached the code.
Thanks in advance,
Bart
autocorrelation_onA0.ino (2.26 KB)