I had a feeling that kernel compilation got slower recently and tried to find the slowest file across randconfig builds. It turned out to be arch/x86/xen/setup.c, which takes 15 seconds to preprocess on a reasonably fast Apple M1 Ultra.
This all comes from one line "extra_pages = min3(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)), extra_pages, max_pages - max_pfn);" that expands to 47MB of preprocessor output after commits 80fcac55385c..867046cc70277.
@arnd But ... how? Even if it expanded to 47 kilobytes, that would be excessive, but 47 *megabytes*?
@KeyJ it nests min() multiple levels deep through min3(), and each level now expands its arguments 20 times (up from 6 back in linux-6.6). Three levels of that multiply out to 8000 expansions for each of the arguments, plus a lot of extra bits with each expansion. PFN_DOWN(MAXMEM) contributes a bit to the initial size as well.
See https://pastebin.com/MmfWH7TM for the first few pages of it.
@dirksteins @arnd @KeyJ It's a huge constant expression that gets evaluated by the compiler. The final code is (probably) fine.
@dirksteins @arnd @KeyJ There's no reason to doubt the correctness, but as has been noted, it is somewhat slow.
Now, as shocking as this might be, it is quite tame compared to the Boost C++ libraries.
I'm guessing that once the 47MB of preprocessor output is fed to the actual compiler, a lot of it will turn out to be semantically null or trivially redundant, and some really basic optimisations like hoisting and common subexpression elimination will collapse the rest back down to a handful of assembly instructions. The cost is paid at compile time, not at run time.