> El Capitan – we don’t know how big of a portion yet as we write this – with 43,808 of AMD’s “Antares-A” Instinct MI300A devices
By comparison, xAI announced that they have 100k H100s. The MI300A and the H100 have roughly similar performance. Meta says they're training Llama-4 on more than 100k H100s, and have the equivalent of 600k H100s' worth of compute. (Note that compute and networking can be orthogonal.)
Also, Nvidia B200s are rolling out now. They offer 2-3x the performance of H100s.
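To put those counts on one axis, here's a rough back-of-envelope in Python. The per-chip TFLOPS figures and the B200 multiplier below are assumptions ballparked from public spec sheets, not measurements, and the model ignores networking and utilization entirely:

```python
# Back-of-envelope fleet compute comparison. All per-chip figures are
# rough public ballparks (assumed), not vendor-confirmed benchmarks.
H100_TFLOPS = 989      # assumed: H100 SXM dense BF16 peak, ~1 PFLOPS
MI300A_TFLOPS = 980    # assumed: MI300A dense BF16 peak, roughly H100-class
B200_SPEEDUP = 2.5     # assumed: midpoint of the quoted 2-3x over H100

fleets = {
    "El Capitan (43,808 MI300A)": 43_808 * MI300A_TFLOPS,
    "xAI (100k H100)":            100_000 * H100_TFLOPS,
    "Meta (600k H100-equiv)":     600_000 * H100_TFLOPS,
}

for name, tflops in fleets.items():
    # 1 EFLOPS = 1e6 TFLOPS
    print(f"{name}: ~{tflops / 1e6:.1f} EFLOPS peak")

# In this crude model one B200 counts as roughly B200_SPEEDUP H100s.
print(f"100k B200 ~= {100_000 * B200_SPEEDUP:,.0f} H100-equivalents")
```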
Without blindingly fast interconnects, otherwise blinding numerical performance dims quite a lot. This is why the Cerebras numbers on heavy numerical problems are competitive only up to a fairly severe ceiling: below that point their on-wafer interconnects suffice; above it they cannot scale the data-communication bandwidth necessary.
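That tradeoff is essentially the roofline model: once a workload's arithmetic intensity drops below the machine's compute-to-bandwidth ratio, bandwidth sets the ceiling, not FLOPS. A minimal sketch with made-up machine numbers (both constants below are illustrative assumptions):

```python
# Minimal roofline model: attainable performance is capped by either
# peak compute or (bandwidth x arithmetic intensity), whichever is lower.
def attainable_flops(peak_flops, bandwidth_bytes_s, intensity_flops_byte):
    return min(peak_flops, bandwidth_bytes_s * intensity_flops_byte)

# Assumed, illustrative machine: 100 PFLOPS peak, 1 TB/s of off-wafer
# bandwidth once the problem no longer fits in fast local memory.
PEAK = 100e15
BW = 1e12

for intensity in (1, 10, 100, 1_000, 100_000):
    perf = attainable_flops(PEAK, BW, intensity)
    bound = "bandwidth-bound" if perf < PEAK else "compute-bound"
    print(f"intensity {intensity:>7} FLOP/byte -> {perf/1e15:6.2f} PFLOPS ({bound})")
```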
If you look at the table toward the bottom, no matter how you slice it, Nvidia has 50% of the total cores, 50% of the total flops, and 90% of the total systems among the Top 500, while AMD has 26% of the total cores, 27.5% of the total flops, and 7% of the total systems.
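For what it's worth, those shares are just each vendor's column divided by the list-wide total. The absolute counts in this sketch are hypothetical placeholders chosen only to reproduce the quoted percentages; the arithmetic is the point:

```python
# Hypothetical Top500 aggregate totals (placeholders, not the real table).
list_totals = {"cores": 100_000_000, "rpeak_pflops": 16_000, "systems": 500}

# Hypothetical per-vendor counts, chosen to match the quoted shares.
vendors = {
    "Nvidia": {"cores": 50_000_000, "rpeak_pflops": 8_000, "systems": 450},
    "AMD":    {"cores": 26_000_000, "rpeak_pflops": 4_400, "systems": 35},
}

for vendor, cols in vendors.items():
    shares = {k: f"{100 * v / list_totals[k]:.1f}%" for k, v in cols.items()}
    print(vendor, shares)  # Nvidia: 50/50/90, AMD: 26/27.5/7
```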
Is it a matter of newly-added compute?
> This time around, on the November 2024 Top500 rankings, AMD is the big winner in terms of adding capacity to the HPC base.