Execution time measurement tool for Arduino software

Hi,

This is my first post :slight_smile: I’ve been working on the Arduino Mega for about a month. So far so good. It is very easy to write programs for the Arduino and I really appreciate it.

Anyway, I have a suggestion about the development environment. Most programmers need to measure the execution time of their interrupt service routines, and sometimes of their loops. The programs that we write must be short enough to let the processor run other programs or IRS will lock the processor. But it’s hard to determine the real execution time in microseconds from software alone; it requires every programmer to write a test program and output the start and end times to measure it.
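For example, at the moment the only way I know is to write a little test sketch by hand, something like the one below (doSomething() is just a placeholder for whatever routine is being measured):

void doSomething() {
  // ... the code whose execution time we want to know ...
}

void setup() {
  Serial.begin(9600);

  unsigned long start = micros();
  doSomething();
  unsigned long elapsed = micros() - start;   // micros() only has about 4 microsecond resolution

  Serial.print("doSomething() took about ");
  Serial.print(elapsed);
  Serial.println(" microseconds");
}

void loop() {
}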

My suggestion is that, since the compiler produces the machine code for those routines, it must be easy to determine, or at least to predict an interval for, the execution time. It would be great if there were a tool which did all those calculations for us. Maybe even for the whole program’s execution time (like setup: some microseconds & loop: from some microseconds to some more).

Please let me know if I’m missing any point, or if there is already a tool that does this calculation.

It's not that easy. For example, if you have an "if", then whether or not it is taken will affect the result. If you have a loop (e.g. for, while), then how many times you go through the loop will affect the result. If you call a function, then you need to know how long that function will take. And the same comments apply to the called function.

In the absence of loops, ifs, function calls etc. you can predict something along the lines of 3.5 µs for an ISR to get going (I think) and probably another 3.5 µs to wrap up, plus around 62.5 ns for each clock cycle at 16 MHz. Instructions generally take between 1 and 3 clock cycles.
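As a very rough worked example (ballpark figures only): at 16 MHz one clock cycle is 62.5 ns, so an ISR body of, say, 30 straight-line instructions at around 2 cycles each is about 60 cycles, or 3.75 µs, plus the roughly 7 µs of entry and exit overhead, giving something in the region of 11 µs total.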

Thanks for the quick reply.

You are right. But I think it wouldn’t be that complicated if it could just provide some time interval. Or would it?

For if statements, it could simply add the branch’s execution time to the predicted maximum time without changing the predicted minimum time. It may be impossible to determine how long while() loops will take to run, but it would be great to have a tool that at least tells me how long each block would take. And besides, an ISR with a “possibly infinite” while loop is a rare thing to come across, I guess :smiley: If there is such a loop in an ISR, it means that under that condition we want the processor to stay locked in that ISR until things are fixed.

Determining the execution time for “for(int i=0;i<100;i++){}” may seem easy. But when the loop variable i is updated during the iterations, things start to change. But at least we should have an idea of how long the block would take, regardless of the number of repetitions.

Determining the execution time for “for(int i=0;i<100;i++){}” may seem easy.

It may seem easy, but the compiler could simply optimise that particular loop out.
To avoid this, you could qualify “i” with “volatile”, but that wouldn’t necessarily be representative.
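Something like this, purely as an illustration:

for (volatile int i = 0; i < 100; i++) {
  // volatile forces the compiler to keep the loop, but the extra loads and
  // stores mean the timing is no longer representative of the original code
}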

AWOL:

Determining the execution time for “for(int i=0;i<100;i++){}” may seem easy.

It may seem easy, but the compiler could simply optimise that particular loop out.
To avoid this, you could qualify “i” with “volatile”, but that wouldn’t necessarily be representative.

I left that loop example empty on purpose, for the sake of simplicity :slight_smile: But the compiler may change the behaviour a lot in order to reduce runtime, etc., so it is a good point. The calculator could work on the compiled binaries for a less erroneous result, if such a tool ever exists in the future.

acarb:
The programs that we write must be short enough to let the processor run other programs or IRS will lock the processor.

That's the Internal Revenue Service, right? Boy are they tough!

Seriously, either you are in the ballpark or not. An ISR should quickly set flags, or do what it has to do. The setup/teardown time for an ISR (around 7 µs) plus the unavoidable time of setting a flag or two gives you a figure of (say) 8 µs. Plus you have the issue that other code may disable interrupts for a few clock cycles. Basically, if I were using ISRs I would want to have quite a bit of time in hand, just in case. So let's say not more than about 10,000 interrupts a second. Maybe 50,000 if you are lucky.
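As a sketch of what I mean by quickly setting a flag (pin 2 / interrupt 0 is just an example, assuming a Uno or Mega):

volatile bool buttonPressed = false;        // flag shared between the ISR and loop()

void buttonISR() {
  buttonPressed = true;                     // do the absolute minimum inside the ISR
}

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT);
  digitalWrite(2, HIGH);                    // enable the internal pull-up on pin 2
  attachInterrupt(0, buttonISR, FALLING);   // interrupt 0 is pin 2 on a Uno/Mega
}

void loop() {
  if (buttonPressed) {
    buttonPressed = false;
    Serial.println("Button!");              // the slow work (printing) happens out here
  }
}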

acarb:
Anyway, I have a suggestion about the development environment. Most programmers need to measure the execution time of their interrupt service routines, and sometimes of their loops. The programs that we write must be short enough to let the processor run other programs or IRS will lock the processor. But it's hard to determine the real execution time in microseconds from software alone; it requires every programmer to write a test program and output the start and end times to measure it.

This reminds me of Siemens PLCs, which normally provide the loop time in OB1. They also keep the maximum and minimum.

Something like that could be programmed within the loop() definition (not sure people would like it though), and you could use the updated value in every cycle, something like the sketch below. Like Nick said, it depends on whether this 'if' is true, or whether that array is larger or smaller, etc...

It could be useful for PID control, where you'd know how much time has elapsed since the last call. But how much overhead would it bring?
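A rough sketch of the idea, just as an illustration using micros():

// Keep the last, minimum and maximum loop() times, a bit like OB1's scan time.
unsigned long lastLoopTime = 0;             // microseconds taken by the previous pass
unsigned long minLoopTime = 0xFFFFFFFFUL;
unsigned long maxLoopTime = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long start = micros();

  // ... the real work goes here; lastLoopTime (from the previous pass)
  // could feed the dt term of a PID controller ...

  lastLoopTime = micros() - start;
  if (lastLoopTime < minLoopTime) minLoopTime = lastLoopTime;
  if (lastLoopTime > maxLoopTime) maxLoopTime = lastLoopTime;
}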

The tool that I suggested would run at compile time, not at run time. All I ask is to be able to see how long my program will take to run, not to use the time that the previous loop took during runtime. That can already be done with the micros() and millis() functions; the problem is that everybody must do it separately for every runtime measurement.

acarb:
The tool that I suggested would run at compile time, not at run time.

You can't. There is no solution.

Well, why not? A block of assembly code with no jump instructions would have a constant runtime, wouldn't it? Can't we just go through the binary code to predict something?

Let's say we are using Timer1 and the function that I attached to it is so large that it takes more time to run than the timer's period. Wouldn't it be nice to get "just in case" warnings?
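For example (just a sketch, assuming a 16 MHz ATmega with Timer1 in CTC mode): if the compare interrupt fires every 1 ms, then the ISR plus everything else has at most 1 ms to finish before the next tick.

#include <avr/interrupt.h>

void setup() {
  noInterrupts();
  TCCR1A = 0;
  TCCR1B = (1 << WGM12) | (1 << CS11);   // CTC mode, prescaler 8 -> 2 MHz timer clock
  OCR1A  = 1999;                         // (1999 + 1) / 2 MHz = 1 ms period
  TIMSK1 = (1 << OCIE1A);                // enable the compare-match A interrupt
  interrupts();
}

ISR(TIMER1_COMPA_vect) {
  // If the code here takes longer than 1 ms, the next interrupt is already
  // pending before this one returns -- exactly the "just in case" warning
  // I would like such a tool to give at compile time.
}

void loop() {
}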

There are solutions. Usually called "worst case execution time analysis".
e.g. this one: aiT Worst-Case Execution Time Analyzers

BTW: I would prefer a runtime-analysis.
gprof should be the correct tool for this. But has it been ported to AVR and Arduino?
At least gprof is part of the gcc toolchain...

Oliver

olikraus:
There are solutions. Usually called "worst case execution time analysis".
e.g. this one: aiT Worst-Case Execution Time Analyzers

BTW: I would prefer a runtime-analysis.
gprof should be the correct tool for this. But has it been ported to AVR and Arduino?
At least gprof is part of the gcc toolchain...

Oliver

Yes, I've heard about them, but as far as I know, the Arduino development environment directly uploads the binary code to the board. It does not produce an output file, which would be needed for this WCET analysis. Thus, we need the Arduino software to do this for us.

I am currently working on my project, trying to get some runtime values using millis() and micros().

Anyway, this gave me hope; I think such a tool is implementable and useful.

acarb:
Yes, I've heard about them, but as far as I know, the Arduino development environment directly uploads the binary code to the board. It does not produce an output file, which would be needed for this WCET analysis.

Not at all. How could it upload without producing a file? It produces files which are easy to inspect, and no doubt you could write software to do further analysis of them.

Hold down Shift while clicking the Verify button and you'll see a whole heap of files produced. I often do this and analyse the results with a view to understanding what is happening.

Hmm. I didn't know about that shift + click option. But all it seems to do is produce object files in a temporary folder (which is in C:\Documents and Settings\user\Local Settings\Temp\build8360526333648373713.tmp), and it immediately deletes the temporary folder afterwards.

I still don't know a way to get at the binary files before they are utterly destroyed by the Arduino software :stuck_out_tongue: Maybe a file rescue program would help, but that is going way too far.

and immediately deletes the temporary folder.

That doesn't happen to me.

AWOL:

and immediately deletes the temporary folder.

That doesn't happen to me.

Sorry, it seems that I somehow missed the folders among the others. My mistake.

I guess the project.cpp.hex is the binary executable written out in readable ASCII, isn't it?

acarb:
Hmm. I didn’t know about that shift + click option. But all it seems to do is produce object files in a temporary folder (which is in C:\Documents and Settings\user\Local Settings\Temp\build8360526333648373713.tmp), and it immediately deletes the temporary folder afterwards.

No, it doesn’t. I often look at the files afterwards, without resorting to undeleting them. For example:

 C:\DOCUME~1\Owner\LOCALS~1\Temp\build2201926831805352004.tmp\Blink.cpp.elf

That was there afterwards. It’s still there. It is probably clogging up my hard disk.

acarb:
I guess the project.cpp.hex is the binary executable written out in readable ASCII, isn't it?

In a terminal window do something like:

avr-objdump -S FILENAME.elf

For .hex files:

avr-objdump -j .sec1 -d -m avr5  FILENAME.hex

(You shouldn't need to use the .hex approach after a compile; use the .elf file.) But it helps disassemble something like a bootloader you don't have the source for.
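If you want to keep the listing to study, you can redirect it to a file, something like:

avr-objdump -S FILENAME.elf > FILENAME.lst

From there you can look up each instruction's cycle count in the AVR instruction set manual and add them up by hand for a straight-line block such as an ISR body, which is roughly what the tool being asked for would automate.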