Can program size affect speed?

Hi,

This is just a general question and I can't provide specific examples, but is it possible that different compilations of the same code will execute at different speeds?

There have been a couple of occasions when I've timed how long a particular function takes (for example, sorting an array of values) by calling millis() before and after the function and working out the difference. What seems to happen is that during testing the function may take (for example) 3-4 ms, but once I compile the library containing that function into a much bigger project, it starts to take slightly longer (maybe 5-6 ms). I've not (yet) tried to do a scientific assessment of this; it's just a general observation. Does this make sense, or is it just my imagination? BTW, I'm not talking about a one-off test, rather a continuous looping of the same code.

Cheers

but is it possible that different compilations of the same code will execute at different speeds?

No. Every time the same code is compiled, the same hex file is produced.

What seems to happen is that during testing the function may take (for example) 3-4 ms, but once I compile the library containing that function into a much bigger project, it starts to take slightly longer (maybe 5-6 ms).

Then you are not timing the same thing.

Does this make sense?

No.

or is it just my imagination?

It's more likely a misunderstanding of what you are timing.

The time to sort something might be affected by the data being sorted - easier/quicker if the data is nearly in the correct state to start with.

...R

easier/quicker if the data is nearly in the correct state to start with.

Depends on the sort algorithm used. Some sort algorithms perform very poorly on already sorted data.

No. Every time the same code is compiled, the same hex file is produced.

True, but if the same code is compiled into a bigger project that includes many other libraries, then presumably the code layout within memory will change?

To be honest, I was thinking that as a project gets bigger and dynamic memory is used up, maybe it doesn't get used as efficiently and memory-intensive functions (e.g. array processing) may suffer?

True, but if the same code is compiled into a bigger project that includes many other libraries, then presumably the code layout within memory will change?

Then, it isn't the "same code".

To be honest, I was thinking that as a project gets bigger and dynamic memory is used up

You don't have any dynamic memory. The RAM is a fixed (static) amount that just sits there. How it is used may involve dynamic allocation of blocks within that fixed-size memory.

maybe it doesn't get used as efficiently and memory intensive functions (i.e. array processing) may suffer?

No. That might be true on a PC with dynamic memory management, where pages of code get swapped in and out of memory; an array spread across many pages can then cause a lot of page swapping.

But, the Arduino doesn't have a clue what page swapping is, or how to implement it. So, it isn't a factor.

Now, there are other factors that can influence how long an iteration of loop() takes, such as serial data arriving or leaving (triggering an interrupt on arrival, or because a timer interrupt fires to send data out), or calls to blocking functions like pulseIn(), Serial.readBytesUntil() or Serial.parseInt().