> ccache folder size started becoming huge. And it just didn’t speed up the project builds, I don’t remember the details of why.
That’s highly unusual, and it suggests the project was misconfigured so that builds were never actually being cached: ccache just accumulated compiled objects it could not reuse.
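A quick way to check whether that is happening (a sketch assuming a reasonably recent ccache and a make-based build; the exact statistics output varies between versions) is to zero the counters, build twice, and compare hits to misses:

    ccache --zero-stats            # reset counters before measuring
    make -j"$(nproc)"              # first build populates the cache
    make clean
    make -j"$(nproc)"              # second build should be almost all cache hits
    ccache --show-stats            # near-zero hits here means something is defeating the cache

If the second build shows mostly misses, the cache is growing without ever being reused, which matches the "huge folder, no speedup" symptom.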
> When I tried it I was working on a 100+ devs C++ project, 3/4M LOC, about as big as they come.
That’s not necessarily a problem. I worked on C++ projects of a similar size and ccache just worked. It has more to do with how your project is set up, and with misconfigurations.
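For reference, the two usual ways a project picks up ccache (a sketch; the right mechanism depends on your build system, and the symlink directory shown is the Debian-style location):

    # CMake: run every compiler invocation through ccache (CMake 3.4+)
    cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache \
          -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ..

    # Or masquerading: put ccache's compiler symlinks first in PATH
    export PATH="/usr/lib/ccache:$PATH"

Either way, every translation unit goes through ccache; if only part of the tree does, the cache will keep growing while most of the build still compiles cold.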
> Compilation of everything from scratch was an hour at the end.
That fits my use case as well. End-to-end builds took slightly longer than 1h, but after adopting ccache the same end-to-end builds would take less than 2 minutes. Incremental builds were virtually instant.
> Switching to lld was a huge win, as well as going from 12 to 24 compilation threads.
That’s perfectly fine. ccache acts at the compilation step, before linking, so a faster linker like lld is an orthogonal win, and being able to run more parallel tasks helps regardless of whether ccache is in place.
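For what it’s worth, with Clang the linker switch is just a flag; this one-liner is only an illustration, and the object file names are placeholders for whatever your build produces:

    # Link with lld instead of the default system linker (Clang)
    clang++ -fuse-ld=lld -o app main.o util.o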
Surprisingly, ccache works even better in this scenario. With ccache, the bottleneck of a build task shifts from CPU/memory to I/O. A nice side effect is that it becomes possible to overcommit the number of jobs, since the processor is no longer maxed out. In my case I could run around 40% more build jobs than physical threads to bring CPU utilization back above 80%.
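As a concrete illustration (the 40% figure is from my setup; treat it as a starting point to tune, not a rule), overcommitting just means passing a larger -j than the hardware thread count:

    # Run ~40% more build jobs than there are hardware threads
    make -j"$(( $(nproc) * 14 / 10 ))"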
> I was a linux dev there, the pch’s worked, (…)
I dare say ccache was not caching what it could because of the precompiled headers. If you really want to keep those, you need to configure ccache to tolerate them. That said, it’s a tad pointless to keep PCHs in a project for performance reasons when you can have a proper compiler cache.
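For completeness, the ccache documentation’s recipe for coexisting with precompiled headers boils down to relaxing a couple of checks; this is a sketch, the config file location varies by version, and GCC additionally needs the -fpch-preprocess compile flag:

    # ccache.conf (e.g. ~/.config/ccache/ccache.conf or ~/.ccache/ccache.conf)
    sloppiness = pch_defines,time_macros

    # GCC builds that use PCH also need -fpch-preprocess on the compile line
    # for ccache to be able to cache those translation units.

Without that, every file that includes the PCH tends to miss the cache, which again produces a large cache directory with little to show for it.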