Method for measuring execution time of function - is it accurate?

Hey people, I’m tinkering with writing my own libraries for educational purposes, so I thought I’d make a small sketch for checking the execution time of functions and operations.

So what I’d like to ask is: would this be accurate?

Basically I start Timer1 with a /1 prescaler, making it reset/overflow every 4.096 ms (with F_CPU = 16000000). I then check TCNT1 before and after executing a function, do this 100 times, take the average, and subtract the time it takes to read TCNT1 twice.
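For reference, the numbers behind that setup, worked out as plain arithmetic (nothing Arduino-specific here):

```cpp
// Timer1 is a 16-bit counter, so it overflows after 65536 ticks.
// With a /1 prescaler at F_CPU = 16 MHz, each tick is 1/16 us,
// so the overflow period is 65536 / 16 = 4096 us = 4.096 ms.
double overflowMillis() {
    return 65536.0 / 16.0 / 1000.0; // 4.096 ms
}
```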

void setup() {
  Serial.begin(9600);
  TCCR1B = (1 << CS10); // Start Timer1 with /1 prescaler
}

unsigned int before, after, timeSpent; // TCNT1 is a 16-bit unsigned register
float average;

void loop() {

  average = 0;

  for (int x = 0; x < 100; x++) { // Test the function 100 times

    TCNT1 = 0; // Clear the timer counter
    before = TCNT1;
    // Function to be tested
    after = TCNT1;
    timeSpent = after - before;
    average += timeSpent;
  }

  // Get the average, subtract the overhead of reading TCNT1 twice,
  // multiply by the real time per timer tick (62.5 ns) and convert to us
  average = (((average / 100.0) - 4.0) * 62.5) / 1000.0;

  Serial.print(average, 3);
  Serial.println("us");

  delay(5000);
}

When I want to measure the time of a function I just use micros() and call the function maybe 1000 times. Something like

unsigned long startMicros, endMicros;

startMicros = micros();
for (int n = 0; n < 1000; n++) {
  myFunction();
}
endMicros = micros();
Serial.println(endMicros - startMicros);

The biggest risk when timing functions is the possibility that the compiler sees the function doing nothing useful and just eliminates it from the code. That makes for very fast execution times.

…R
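One common way to guard against the compiler eliminating the call (not something shown in the thread, just a sketch) is to write the result into a `volatile` sink; `myFunction` here is a stand-in name for whatever you are benchmarking:

```cpp
#include <cstdint>

// Stand-in for the function under test; the name and body are
// made up for illustration.
static uint16_t myFunction(uint16_t x) {
    return x * 3u + 7u;
}

// A volatile sink: every store must actually happen, so the
// compiler cannot delete the loop body as dead code.
volatile uint16_t sink;

void benchmarkBody() {
    for (uint16_t n = 0; n < 1000; n++) {
        sink = myFunction(n); // result is "observed", call cannot be elided
    }
}
```

Passing a loop-dependent argument (as paniha does below with `x`) helps too, but the volatile store also stops the compiler from hoisting a now-constant call out of the loop.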

Yeah, I try to avoid optimisation by using x as a function input where applicable.

I used micros() before as well, but it has a resolution of 4 us. Plus it's a ~27-line function with if-statements, shifting and other stuff; some of those operations alone can take some us.

My method (if it's accurate) has a resolution of 0.0625 us, so 64 times higher, which is the point of this :slight_smile:
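The arithmetic behind those two resolution figures, using only the numbers already quoted in the thread:

```cpp
// Timer1 tick length in microseconds at F_CPU = 16 MHz, /1 prescaler.
double tickMicros() {
    return 1.0 / 16.0; // 0.0625 us = 62.5 ns
}

// micros() has 4 us granularity on a 16 MHz AVR, so the timer method
// resolves intervals 4 / 0.0625 = 64 times finer.
double resolutionRatio() {
    return 4.0 / tickMicros(); // 64
}
```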

Use an I/O pin, a fast toggle, and a scope or analyser.

paniha:
I used micros before as well, but it has a resolution of 4us,

Over 1000 iterations that reduces to 4 nanosecs per iteration if my maths is correct.

…R

Robin2:
Over 1000 iterations that reduces to 4 nanosecs per iteration if my maths is correct.

…R

I’m not sure I follow?
micros() can’t output a value less than 4 us, so it can’t be used to measure a timespan below 4 us?

paniha:
I'm not sure i follow?
micros() can't output a value less than 4us, so it can't be used to measure a timespan below 4us?

If you run the code 1000 times and you're off by 4us, then when you divide by 1000 to get the time for one single execution your error is now 4ns.

Basic maths.
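That division of the error is the whole trick; spelled out as code (the 4 us figure is the worst-case micros() quantisation error from the posts above):

```cpp
// The quantisation error applies once to the whole measurement,
// so dividing the total by the iteration count divides the error too.
double perIterationErrorUs(double totalErrorUs, int iterations) {
    return totalErrorUs / iterations; // 4 us / 1000 = 0.004 us = 4 ns
}
```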

paniha:
I'm not sure i follow?
micros() can't output a value less than 4us, so it can't be used to measure a timespan below 4us?

get start time

run fn(x) 1000 times

get end time

subtract start time from end time and divide by 1000 -- that division is where precision is increased.

Trying to time each execution of the function and then adding those up means you accumulate the tolerances.

You could disassemble the code and count instruction cycles.

Robin2:
Over 1000 iterations that reduces to 4 nanosecs per iteration if my maths is correct.

...R

Duh...

I was locked in on measuring the time of each run of the function, and didn't consider doing several runs and averaging the total time instead of a set of intervals.

Cheers!