Figuring out what compiles to more efficient code?

The project I'm working on is fairly timing sensitive: I'm using clock interrupts, so the faster the ISR runs, the better. I'm trying to make my code as efficient as possible; there are often several ways for me to accomplish the same result, but of course they'll compile differently, and some statements may take fewer clock cycles to execute than others. Is there any good way to figure out which statements compile more efficiently? Is simply looking at the size of the compiled code a reasonable indicator, or is that not a good measure?

For the same reason, I was wondering: if an arithmetic expression uses only constants, will the result be calculated at compile time or at run time? (e.g., will 1 << 4 compile to 16, or will the bit shift be executed every time that code runs?)

Thanks!

Jonathan

Hi Jonathan,

Yes, constants are calculated at compile time, and the compiler is pretty good at recognizing things like the fact that a variable divided by 16 is the same as that variable shifted right by 4, so it will implement the fast bit shift instead of doing the division.
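For example (a minimal sketch; the names are mine, and I'm using an unsigned variable because signed division by a power of two needs an extra rounding correction, so it doesn't reduce to a plain shift):

    const uint8_t mask = 1 << 4;  // folded to 16 at compile time; no run-time shift

    // For an unsigned variable, the compiler will typically emit a right
    // shift (x >> 4) here instead of calling a division routine.
    uint16_t scale(uint16_t x) {
      return x / 16;
    }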

Looking at code size alone doesn't tell you anything about performance. The size of a car doesn't tell you how fast it can go; it's what's under the hood that counts.

A good way to optimise code is to look at the assembler output from the compiler. But this is only helpful if you invest the time to learn to read and understand the assembler code. If you will be doing a lot of speed-critical programming then it's a good skill to have. If this is a one-off thing, it is easier to post the fragment you want optimised and get advice from people who have been through that learning curve.

Another way is to use a timing device to measure the execution speed. That is difficult to do in an ISR if you don't have something like a logic analyser, but if you can take the code out of the ISR and run it in the loop of a sketch, you may be able to measure how changes to the code speed up or slow down performance, particularly if you measure over thousands or millions of iterations.
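Something along these lines (just a sketch; the iteration count is arbitrary, and writing the result to a volatile variable stops the compiler from optimising the whole loop away):

    volatile uint16_t sink;  // volatile, so the test loop can't be deleted

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      unsigned long start = micros();
      for (unsigned long i = 0; i < 100000UL; i++) {
        sink = (uint16_t)i / 16;  // put the code under test here
      }
      Serial.println(micros() - start);  // microseconds for 100,000 iterations
      delay(1000);
    }

The absolute number includes the loop overhead, so it's most useful for comparing two candidate versions of the same code rather than as an exact cycle count.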

Mem,

Thanks so much for the comprehensive (and quick) reply! Point by point:

Yes, constants are calculated at compile time, and the compiler is pretty good at recognizing things like the fact that (128 / 16) is the same as (128 >> 4), so it will implement the fast bit shift instead of doing the division.

I'm not sure I understand: if it's calculated at compile time, then why does efficiency matter (other than compile speed)? If what ends up in the code is a constant regardless, why does it matter if it was calculated using division or a bit shift?

Looking at code size alone doesn't tell you anything about performance. The size of a car doesn't tell you how fast it can go; it's what's under the hood that counts.

That's what I guessed regarding the code size: obviously loops and functions can decrease code size, but not execution time.

A good way to optimise code is to look at the assembler output from the compiler. But this is only helpful if you invest the time to learn to read and understand the assembler code.

I'm more than willing to do this; somewhat eager, in fact. I programmed a bit in assembler back in high school and quite enjoyed it. Can you (or anyone else) suggest any good resources for learning AVR assembler? Also, any tools that make examining the code easier (especially on OS X, but any *nix would be OK)?

...if you can take the code out of the ISR and run it in the loop of a sketch, you may be able to measure how changes to the code speed up or slow down performance, particularly if you measure over thousands or millions of iterations.

This has generally been my strategy for determining speed, but, of course, it has its limitations, which is why I'd like a less brute-force method.

Thanks again!

I'm not sure I understand: if it's calculated at compile time, then why does efficiency matter (other than compile speed)? If what ends up in the code is a constant regardless, why does it matter if it was calculated using division or a bit shift?

Just to say that the first example I posted was not really clear, so I edited it a minute later with a better example using a variable (whose value, of course, is not available at compile time). You read my post before the edit.

You can use avr-objdump to look at the assembly code produced by the compiler.
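Something like this should work (the path to the .elf file depends on where your build puts it, and -S only interleaves the C source if the code was compiled with debugging info):

    avr-objdump -d -S /path/to/your_sketch.elf > your_sketch.lst

Then open the .lst file in any text editor and look for the function you're interested in.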