ccache is awesome until it isn't.
IF you know it's there, know it's not perfect, and know there's a secret layer of voodoo between your source code and what your device sees in the executable, it can be awesome. If any one of those assumptions breaks down, you can spend a lot of nerd points in your life (and you only get just so many....) before you even know where to LOOK for a problem, let alone find one.
The day you find out your group silently rolled out a ccache environment on a machine you can't see, with error messages redirected to null AND a failed/failing/full disk, can be a very bad day indeed. It begins with "I changed X but Y didn't change," escalates to "I compiled the source to assembly via -S -o foo.o ... and I get a different result reading that output than when I objdump --disassemble the object," and ends at "I made the first line of main() an abort(), but this program is still running, WTH?" It can be a very long day. I've encountered all of these progressions. I encountered two of them today.
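If you suspect you're in that hole, one quick sanity check is to rebuild with ccache forced out of the loop and diff the disassemblies. This is only a sketch: foo.c, the flags, and the output names are stand-ins for whatever your build actually does, and a bit-for-bit diff assumes your compiler output is deterministic.

    # Build through whatever wrapper your environment has quietly set up.
    cc -O2 -c foo.c -o foo_cached.o

    # Build again with ccache explicitly bypassed (ccache honors CCACHE_DISABLE).
    CCACHE_DISABLE=1 cc -O2 -c foo.c -o foo_fresh.o

    # If the disassemblies differ, something between you and that object is lying.
    objdump --disassemble foo_cached.o > cached.txt
    objdump --disassemble foo_fresh.o  > fresh.txt
    diff cached.txt fresh.txt && echo "objects match" || echo "stale object suspected"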
Without inside knowledge of this specific case, I'd venture that THIS, not some secret desire to make your builds slow, is why tools vendors generally don't default aggressive caching to ON. It's better to be slower and deterministic than faster with another full layer between the human writing the source code and the machine executing it. When it works 100% of the time, it's great, but when it's wrong, it doesn't matter how quickly your code doesn't work.
Using ccache is awesome. Debugging ccache is sort of terrible. Debugging ccache when you don't know it's involved in the process is totally terrible.
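For what it's worth, finding out whether ccache is even in the loop is usually a two-minute check once you think to do it. A sketch, assuming a bash-ish shell and GNU userland; foo.c and the make target are placeholders for whatever your build really invokes:

    # Is "cc" the real compiler, or a ccache symlink/wrapper on the PATH?
    type -a cc gcc
    readlink -f "$(command -v cc)"

    # If ccache is installed, its counters move whenever it handles a compile.
    ccache -s                     # note the current hit/miss statistics
    touch foo.c && make foo.o     # force a small rebuild
    ccache -s                     # if any counter moved, ccache is in your build path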
Of COURSE, if your build is on Windows and you're spending a couple of cores preventing strangers from using your computer to send malware to the entire world, you deserve what you get.