Yes, constants are calculated at compile time, and the compiler is pretty good at recognizing things like the fact that (128 / 16) is the same as (128 >> 4), so it will emit the fast bit shift instead of doing the division.
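For example (a minimal sketch; the names kBucket and bucketOf are mine), when both operands are constants the whole expression is folded at compile time, and when only the divisor is a constant power of two the compiler typically strength-reduces the division to a shift:

```cpp
#include <stdint.h>

// Both operands are constants: this folds to 8 at compile time,
// so neither a division nor a shift runs on the target at all.
const uint8_t kBucket = 128 / 16;

// The dividend is a runtime value: for an unsigned type the compiler
// typically emits a single shift (x >> 4) instead of a real division.
uint8_t bucketOf(uint8_t x) {
  return x / 16;
}
```

One caveat: for signed types a plain shift rounds toward negative infinity while division rounds toward zero, so the compiler has to add a fix-up for negative values. Unsigned types give it the cleanest path to a bare shift.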
Looking at code size alone doesn't tell you anything about performance. Just as the size of a car doesn't tell you how fast it can go, it's what's under the hood that counts.
A good way to optimise code is to look at the assembler output from the compiler, but that's only helpful if you invest the time to learn to read and understand assembler.
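On an AVR board, one way to get at that output (the file names are assumptions; the Arduino IDE puts the .elf in its temporary build folder) is to disassemble the linked sketch with avr-objdump, or ask avr-gcc directly for assembler from a single source file:

```
# Disassemble the compiled sketch
avr-objdump -d sketch.ino.elf > sketch.asm

# Or compile one file straight to assembler
avr-gcc -Os -mmcu=atmega328p -S foo.c -o foo.s
```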
...if you can take the code out of the ISR and run it in the loop of a sketch, you may be able to measure how changes to the code speed up or slow down performance, particularly if you measure over thousands or millions of iterations.
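A minimal harness along those lines might look like this (the iteration count and names are my choices; the division is just a stand-in for whatever you pulled out of the ISR). The volatile sink keeps the optimizer from deleting the work being timed:

```cpp
volatile uint8_t sink;  // volatile: the compiler can't optimize the loop away

void setup() {
  Serial.begin(9600);

  const uint32_t N = 100000UL;
  uint32_t start = micros();
  for (uint32_t i = 0; i < N; i++) {
    sink = (uint8_t)i / 16;  // the code under test
  }
  uint32_t elapsed = micros() - start;

  Serial.print(F("total us: "));
  Serial.println(elapsed);
  Serial.print(F("us per iteration: "));
  Serial.println((float)elapsed / N);
}

void loop() {}
```

For a fairer per-operation figure, time the same loop with only the volatile store in it and subtract that baseline, since the loop counter and the store cost cycles of their own.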
I'm not sure I understand: if it's calculated at compile time, then why does efficiency matter (other than compile speed)? If what ends up in the code is a constant regardless, why does it matter if it was calculated using division or a bit shift?