
Speeding up C++ build times
kjksf 15 days
I wrote about how I keep build times sane in SumatraPDF at https://blog.kowalczyk.info/article/96a4706ec8e44bc4b0bafda2...

The idea is the same: reduce the duplicate parsing of .h files.

I don't use any tools, just a hard-core discipline of only #include'ing .h in .cpp files.

The problem is that if you start #include'ing .h files in other .h files, you quickly introduce duplication that is intractable for a human to avoid.
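As a sketch of that discipline (file names here are illustrative, not from SumatraPDF): headers carry only declarations and forward declarations, and the .cpp is the one place where headers get pulled together:

```cpp
#include <cstdio>

// --- renderer.h (sketch) --------------------------------------------
// No #include of other project headers: a forward declaration is
// enough, because the interface only takes Document by reference.
class Document;                     // forward declaration, not #include
int page_count(const Document& d);

// --- document.h (sketch) --------------------------------------------
class Document {
public:
    int pages = 3;
};

// --- renderer.cpp (sketch) ------------------------------------------
// Only the .cpp includes both headers, so document.h is parsed once
// per translation unit that genuinely needs the full definition.
int page_count(const Document& d) { return d.pages; }
```

Any .cpp that only passes `Document` around never has to parse document.h at all.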

On another note: C++ compilers should, by default, keep statistics about the chain of #includes parsed during compilation, dump them to a file at the end, and summarize how badly you're re-parsing the same .h files during the build.

That info would help people remove redundant #include's.

But of course, even when compilers do have such options, you have to turn on extra flags, and they spam your build output instead of writing to a file.


snypehype46 15 days
Coincidentally, in the project I'm currently working on, I managed to reduce our compile times significantly (~35% faster) using ClangBuildAnalyzer [1]. The two main things that helped were precompiled headers and explicit template instantiations.
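Explicit instantiation in a nutshell (a minimal sketch; the names are made up, not from the project): declare `extern template` in the header so every including translation unit skips instantiating the template, then instantiate it once in a single .cpp:

```cpp
// --- square.h (sketch) ----------------------------------------------
template <typename T>
T square(T x) { return x * x; }

// Tell every includer NOT to instantiate square<int> itself...
extern template int square<int>(int);

// --- square.cpp (sketch) --------------------------------------------
// ...and pay the instantiation cost exactly once, here.
template int square<int>(int);
```

For heavily used templates this can turn N identical instantiations (one per translation unit, all but one thrown away by the linker) into a single one.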

Unfortunately, the project still remains heavy to compile because of our use of Eigen throughout the entire codebase. The analysis with Clang's "-ftime-trace" shows that 75-80% of the compilation time is spent in the optimisation stage, but I'm not really sure what to do about that.

[1] https://github.com/aras-p/ClangBuildAnalyzer


petermcneeley 15 days
The video game industry uses bulk builds (master files), which group all the .cc files into a few very large single .cc files. The speedups here are at least 5-10x. These bulk files are sent to other developers' machines with possible caching. The result is 12-minute builds instead of 6 hours.
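A bulk (unity) file is just a .cc that textually includes other .cc files, so their shared headers are parsed once per batch instead of once per file. A sketch, with made-up file names:

```cpp
// unity_batch_01.cc (sketch; file names are illustrative)
// Compiling this one file replaces compiling the three files below
// separately, so <vector>, <string>, engine.h, etc. are each parsed
// once for the whole batch rather than once per .cc file.
#include "renderer.cc"
#include "physics.cc"
#include "audio.cc"
// Caveat: all three now share one translation unit, so they must not
// collide on macros, using-directives, or internal-linkage symbols.
```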

andersa 15 days
It's frustrating to see the C++ committee spend year after year on pointless new over-engineered libraries instead of finally fixing the compile times. At a high level, with only one change to the language, we could entirely eliminate this problem!

Consider the following theoretically simple change:

A definition in a file may not affect headers included after it. If you want global configuration, define them at the project level, or in a header included by all files that need it.

i.e. we need to break this construct:

    #define MY_CONFIG 1
    #include "header_using_MY_CONFIG.h"
That's really all we need to do to completely eliminate the nonsense that is constant re-parsing of headers, and to turn the build process into a task graph where each file is processed exactly once and each template is instantiated exactly once, the intermediate outputs of which can be fully cached.
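The project-level alternative could look like this (a sketch with made-up names): the option lives in one config header, or is passed on the command line as `-DMY_CONFIG=1`, so every translation unit sees an identical preprocessed header:

```cpp
// --- config.h (sketch) ----------------------------------------------
// Single project-wide switch; alternatively pass -DMY_CONFIG=1 to the
// compiler and drop this header entirely.
#ifndef MY_CONFIG
#define MY_CONFIG 1
#endif

// --- header_using_MY_CONFIG.h (sketch) ------------------------------
// Because no includer can redefine MY_CONFIG first, this header now
// preprocesses identically everywhere, so its parse could be cached
// and reused across the whole build.
#if MY_CONFIG
inline int feature_level() { return 1; }
#else
inline int feature_level() { return 0; }
#endif
```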

Most real-world large projects already practice IWYU (include-what-you-use), meaning they are already fully compatible with this.

There are some videos by Jonathan Blow explaining that this is exactly why the Jai compiler is so fast. Why must we still suffer with these outdated design decisions from 50 years ago in C++? Why can't the tech evolve?

/end rant


diath 15 days
Ever since I tried -ftime-trace in Clang to improve build times in a project a while ago, I've been very conscious about using forward declarations wherever possible. However, I wish we had proper module support that actually worked well; having to keep this in mind whenever writing new code just so your project doesn't take forever to compile sucks. This shouldn't even be something we have to think about in 2024.
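For reference, the forward-declaration pattern (a minimal sketch; the names are invented): a pointer member only needs a declared type, so the header stops dragging in the implementation's includes, and only the .cpp recompiles when `Renderer` changes:

```cpp
#include <memory>

// --- widget.h (sketch) ----------------------------------------------
class Renderer;                        // forward declaration is enough

class Widget {
public:
    Widget();
    ~Widget();                         // defined where Renderer is complete
    int draw_calls() const;
private:
    std::unique_ptr<Renderer> r_;      // pointer member: incomplete type OK
};

// --- widget.cpp (sketch) ---------------------------------------------
class Renderer {                       // stand-in for "renderer.h"
public:
    int calls = 0;
};

Widget::Widget() : r_(std::make_unique<Renderer>()) {}
Widget::~Widget() = default;           // Renderer is complete here
int Widget::draw_calls() const { return r_->calls; }
```

The destructor is declared in the header but defined only after `Renderer` is complete, which is what lets `std::unique_ptr` work with the incomplete type.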