Compilation time vs computer hardware

This is more of a curiosity than anything else. I recently upgraded my computer from a Phenom II 965 to a Ryzen 7 1700, which has a bit more processing power; however, I noticed that the compilation times are more or less the same. How does this work?

I suspect that hard disk performance has more influence than raw processor power.

sterretje:
I suspect that hard disk performance has more influence than raw processor power.

I also changed from a SATA HDD to a PCIe SSD, with approximately 20x faster read/write speeds.

I don't suspect that disk access times have anything to do with compilation times, as all the code (compiler and sketch) is already in RAM before compilation starts (except, maybe, the libraries that have to be reloaded in case they have been changed in the meantime).

You may have noticed that the first time a sketch is compiled, it takes a lot longer than subsequent times (when you have only made changes to the sketch). The message the IDE shows is that the compilation options have changed and that it has to start all over again (I am just translating here, since I use the French IDE).

I wouldn't expect a big change in compile time from the change of computer. If you run a benchmark test on both computers, you might find only a 5 to 10% increase in processing power (and/or speed). That would make a 60-second compile time drop to roughly 54 to 57 seconds (60 / 1.10 ≈ 54.5), nothing really noticeable.

Jacques

EDIT: just checked with userbenchmark.com, and your new processor should be more than twice as fast...

jbellavance:
I don't suspect that disk access times have anything to do with compilation times, as all the code (compiler and sketch) is already in RAM before compilation starts (except, maybe, the libraries that have to be reloaded in case they have been changed in the meantime).

Not sure how you could possibly reach that conclusion, given that the compiler operates on files, NOT memory images....
Regards,
Ray L.

RayLivingston:
given that the compiler operates on files, NOT memory images...

My mistake, then.

Jacques

RayLivingston:
Not sure how you could possibly reach that conclusion, given that the compiler operates on files, NOT memory images....
Regards,
Ray L.

But any decent OS will cache files in RAM, making disk access speed irrelevant after the files get loaded into RAM the first time they're used.
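As a rough illustration (a minimal sketch, not a rigorous benchmark; the file path is just a hypothetical stand-in, and the first read may already be cached), timing the same read twice shows the page cache at work:

```python
import time

# Hypothetical path; substitute any reasonably large file on your system.
PATH = "core.a"

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

# The first read may hit the disk (unless the file is already cached);
# the second read is typically served straight from the OS page cache.
cold, size = timed_read(PATH)
warm, _ = timed_read(PATH)
print(f"{size} bytes: first read {cold * 1000:.2f} ms, second read {warm * 1000:.2f} ms")
```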

I think compilation time is generally limited by sub-process creation time/overhead for the 50+ processes involved in the build process (which raises some interesting optimization possibilities).
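To get a feel for that overhead, here's a minimal sketch (it spawns a do-nothing interpreter rather than a real compiler, so it measures only process creation/tear-down, not compilation work):

```python
import subprocess
import sys
import time

N = 50  # roughly the number of tool invocations in an Arduino build

start = time.perf_counter()
for _ in range(N):
    # Spawn a subprocess that does essentially nothing, so the measured
    # time is dominated by process creation and tear-down overhead.
    subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed = time.perf_counter() - start
print(f"{N} spawns took {elapsed:.2f} s ({elapsed / N * 1000:.0f} ms each)")
```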

westfw:
I think compilation time is generally limited by sub-process creation time/overhead for the 50+ processes involved in the build process (which raises some interesting optimization possibilities).

That's true. Windows is well-known for having relatively long process start-up and tear-down times.

Windows 10 actually has a bug, not present in Windows 8 and earlier, in which process tear-down is not multithreaded inside the kernel, which defeats having multiple cores when you're spawning many processes at a time. See "24-core CPU and I can't move my mouse" (Random ASCII, the tech blog of Bruce Dawson) for an analysis of that particular problem.
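As a rough experiment (nothing like the kernel-level tracing in Dawson's post), you can spawn the same trivial processes from several worker threads and compare against the serial timing above; on a system affected by the serialized tear-down, adding workers helps far less than the core count suggests:

```python
import concurrent.futures
import subprocess
import sys
import time

N = 50
WORKERS = 8  # try values up to your core count

def spawn_one(_):
    # Same do-nothing subprocess as before, now launched concurrently.
    subprocess.run([sys.executable, "-c", "pass"], check=True)

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    list(pool.map(spawn_one, range(N)))
print(f"{N} spawns with {WORKERS} workers: {time.perf_counter() - start:.2f} s")
```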

Neat; I wasn't aware of "well known" issues.
The easiest initial fix is to replace the one-at-a-time additions to core.a that end up doing 25 invocations of avr-gcc-ar...
(You can clump the actual compiles too, but last time I looked, the IDE walked the directory and made individual decisions based on the type of each file. By the time you're building core.a, it would be easy to have a list of the .o files...)
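For illustration, a batched invocation could look something like this (a sketch only; the build directory is hypothetical, and it relies on avr-gcc-ar accepting multiple members per call, as GNU ar does):

```python
import pathlib
import subprocess

BUILD_DIR = pathlib.Path("build/core")  # hypothetical build directory
objects = sorted(str(p) for p in BUILD_DIR.glob("*.o"))

# One-at-a-time (what the IDE does): one ar process per object file.
# for obj in objects:
#     subprocess.run(["avr-gcc-ar", "rcs", "core.a", obj], check=True)

# Batched: a single ar invocation with the whole list of .o files,
# trading ~25 process spawns for one.
subprocess.run(["avr-gcc-ar", "rcs", "core.a", *objects], check=True)
```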