The working principle of this lib is that, for each period of the input signal, it counts how many CPU clock cycles elapse and converts that count into an actual time length. It has a buffer, so one can measure multiple periods and read the results in a batch. However, I found an issue.

I used a high-quality signal generator to test what resolution this lib can achieve and to understand how the frequency is measured, and I found a strange issue. I started at 1600 Hz and stepped through 1600.001, 1600.002, 1600.003, all the way to 1600.020 Hz. But the frequencies returned by FreqMeasure.countToFrequency were all the same: 1600.00000. When I instead printed count/sum*16000000, it showed
1599.966064
1599.972168
1599.972778
1599.973755
1599.974487
1599.975342
1599.976318
1599.977051
1599.977905
1599.978516
1599.979614
1599.980225
1599.981079
1599.980835

which is quite linear with the input signal frequency. So the hardware can actually measure the tiny frequency differences. It seems some rounding operation is done in the library, or there is some digitization issue. Is there a way to fix it?

And the frequency returned by FreqMeasure.countToFrequency is not a simple factor * count / sum, right?

#include <FreqMeasure.h>

void setup() {
  Serial.begin(115200);
  FreqMeasure.begin();
}

double sum = 0;
int count = 0;

void loop() {
  if (FreqMeasure.available()) {
    // average several readings together
    sum = sum + FreqMeasure.read();
    count = count + 1;
    if (count > 2000) {
      float frequency = FreqMeasure.countToFrequency(sum / count);
      Serial.println(frequency, 12);
      Serial.println(count / sum * 16000000, 12);
      sum = 0;
      count = 0;
    }
  }
}

I also posted this issue on the author's GitHub. I think the sum/count result shows that the chip can resolve 0.01 Hz correctly, but the FreqMeasure.countToFrequency() function does some kind of rounding. I tried to open the lib files but was not able to understand the whole workflow in them. I think a very simple modification could solve this problem.

Please fix the code in your first post by adding code tags where they are necessary (read "How to get the best out of this forum" and modify your post accordingly).

To your question:

When you call FreqMeasure.countToFrequency(sum / count), the function expects a uint32_t, so the result of sum / count is truncated: it is no longer a floating-point number, just the integral part.

When you do the maths yourself, count/sum*16000000, the whole computation is done in floating point, thus leading to a different result.

Untested, but you could add to the library a function (prototype in the .h and code in the .cpp) with a double parameter as its signature:

Hi, @joedodo
What resolution do you need?
You may need to go to a crystal oven to keep the crystal stable.
The clock crystals are not exact or super-stable; if they were, we would probably be paying 5 to 10 times more for a controller board.

Your signal generator is outputting changes at 0.001 Hz, so why are you trying to measure down to 0.000001 Hz?
If you change your output in 0.001 Hz steps, what do you get?
Then apply a calibration factor to get the output to agree with the input.
No two CPU crystals are the same.

Thanks. I was using an Uno. So the original FreqMeasure.countToFrequency() only applies an F_CPU coefficient to sum/count? And the result is due to rounding? I thought the conversion was quite a complex one.

FreqMeasure.countToFrequency is a simple division, F_CPU / count, but it returns a single-precision float, which has 6 or 7 significant digits. My guess is that this has the effect of rounding all of your example values to the same number. If you continued to increase your test frequency, you would eventually see another discrete step in the result.

count/sum*16000000 is computed as a double, which nominally has about 15 significant digits (though on the Uno, double is actually the same 4-byte type as float).

As per response #3, whatever board you're using appears to have a frequency offset error on the order of a few tens of parts per million (the 1600 Hz input reads about 0.034 Hz low), which is typical for computer-grade crystals.

Because the function you call takes an integer as a parameter, when you call FreqMeasure.countToFrequency(sum / count) the compiler first calculates sum / count.

Because sum is a double, the math is done using double and the result is a double.

Then, because the function requires an integer, that parameter is converted to an integer (uint32_t) by dropping the decimal part.

The function then does the division listed above using single-precision maths and returns the result.

When you do it yourself, (F_CPU * count / sum), which is mathematically the same thing, no rounding to an integer happens; the whole thing is calculated using double (whatever that is on your platform — on the UNO it's single-precision maths, the same as float), and thus you get a different result.

To take an example: if you compute (1/3) * 3 with or without rounding down the (1/3), you get very different results.

1/3 = 0 if you round it down, and then 0 * 3 = 0.

If you do the whole thing using floating-point arithmetic, then you get 1.