It’s not surprising they didn’t see a linear speedup from splitting into so many crates. The compiler now produces a large number of intermediate object files that must be read back and linked into the final binary. On top of that, rustc caches a significant amount of semantic information — lifetimes, trait resolutions, type inference — much of which now has to be recomputed for each crate, including dependencies. That introduces a lot of redundant work.
I would also expect this to hurt runtime performance, as it likely reduces inlining opportunities (unless LTO is really good now?)
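For what it's worth, cross-crate inlining can be bought back at the cost of longer release builds; a minimal sketch of the relevant Cargo profile settings (values are illustrative, not from the post):

```toml
# Cargo.toml (sketch): trade longer release builds for cross-crate inlining.
[profile.release]
lto = "fat"          # whole-program LTO across every crate in the dependency graph
codegen-units = 1    # one codegen unit per crate gives the optimizer the widest scope
```

`lto = "thin"` is the usual middle ground when fat LTO makes release builds too slow.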
Amdahl’s Law would like to have a word.
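For reference, Amdahl’s Law caps the speedup by the serial fraction of the build; a quick sketch, with p the (assumed) parallelizable fraction and n parallel workers:

```latex
S(n) = \frac{1}{(1 - p) + \frac{p}{n}},
\qquad
\lim_{n \to \infty} S(n) = \frac{1}{1 - p}
```

With an assumed p = 0.9, the ceiling is 10x no matter how many crates compile concurrently; the serial parts of the pipeline, like the final link, set the limit.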
> That’s right — 1,106 crates! Sounds excessive? Maybe. But in the end this is what makes rustc much more effective.
> What used to take 30–45 minutes now compiles in under 3 minutes.
I wonder if this kind of trick can be implemented in rustc itself in a more automated fashion to benefit more projects.
- in Rust, one semantic compilation unit is one crate
- in C, one semantic compilation unit is one file
There are quite a few benefits to the Rust approach, but also drawbacks: huge projects have to be split into many crates, typically organized as a workspace, to maximize parallel building (see the sketch below).
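A minimal sketch of what that split looks like in practice (crate names are hypothetical): every workspace member is its own compilation unit, so cargo can schedule independent members on separate cores.

```toml
# Cargo.toml at the workspace root (hypothetical crate names, not the post's layout).
# app_parser and app_codegen depend only on app_core, so once app_core is built
# cargo can compile them in parallel; app_cli then ties everything together.
[workspace]
members = ["app_core", "app_parser", "app_codegen", "app_cli"]
resolver = "2"
```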
Oversimplified: the codegen-units setting tells the compiler into how many parts it is allowed to split a single semantic compilation unit for code generation.
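A minimal sketch of where that knob lives (the values shown are, as far as I know, the current defaults: 256 for dev, 16 for release):

```toml
# Cargo.toml (sketch): more codegen units means more intra-crate parallelism
# during code generation, at some cost in optimization quality.
[profile.dev]
codegen-units = 256

[profile.release]
codegen-units = 16
```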
Now it still seems strange (as in, it looks like a performance bug) that most of the time rustc was stuck on just one thread (instead of e.g. 8).
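`cargo build --timings` is a handy way to see this: it charts per-crate CPU utilization over the whole build. If the front-end itself is the bottleneck, there is also the experimental parallel front-end on nightly; a hedged sketch, assuming a nightly toolchain (the flag is unstable and may change):

```toml
# .cargo/config.toml (sketch): requires a nightly toolchain; -Zthreads is unstable.
[build]
rustflags = ["-Zthreads=8"]   # let the rustc front-end use up to 8 threads per crate
```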
It will give you a workspace with a bunch of crates that seems to exercise some of the same bottlenecks the blog post described.