- El Capitan is a classified US government asset that crunches data related to the US nuclear arsenal
- ServeTheHome's Patrick Kennedy was invited to the launch at LLNL in California
- AMD and HPE's CEOs were also part of the ceremony
In November 2024, the AMD-powered El Capitan officially became the world's fastest supercomputer, delivering a peak performance of 2.7 exaflops and a sustained performance of 1.7 exaflops.
Built by HPE for the National Nuclear Security Administration (NNSA) at Lawrence Livermore National Laboratory (LLNL) to simulate nuclear weapons tests, it is powered by AMD Instinct MI300A APUs and dethroned the previous leader, Frontier, pushing it down to second place among the most powerful supercomputers in the world.
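For context, the gap between those two headline figures is easy to sanity-check. The short Python sketch below is purely illustrative, using only the exaflop numbers quoted above, and shows that the sustained figure works out to roughly 63% of the theoretical peak.

```python
# Back-of-the-envelope check of the figures quoted above:
# sustained performance as a fraction of theoretical peak.
peak_exaflops = 2.7       # peak performance reported for El Capitan
sustained_exaflops = 1.7  # sustained performance reported for El Capitan

efficiency = sustained_exaflops / peak_exaflops
print(f"Sustained/peak ratio: {efficiency:.0%}")  # prints roughly 63%
```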
Patrick Kennedy from ServeTheHome was recently invited to the launch event at LLNL in California, which also included the CEOs of AMD and HPE, and was allowed to bring along his phone to capture "some photos before El Capitan gets to its classified mission."
Not the largest
During the tour, Kennedy observed, "Each rack has 128 compute blades that are fully liquid-cooled. It was very quiet in this system, with more noise coming from the storage and other systems on the floor."
He then noted, "On the other side of the racks, we have the HPE Slingshot interconnect cabled with both DACs and optics."
The Slingshot interconnect side of El Capitan is, as you'd expect, liquid-cooled, with switch trays occupying only the bottom half of the space. LLNL explained to Kennedy that their codes do not require full population, leaving the top half for the "Rabbit," a liquid-cooled unit housing 18 NVMe SSDs.
Looking inside the system, Kennedy saw "a CPU that looks like an AMD EPYC 7003 Milan part, which feels about right given the AMD MI300A's generation. Unlike the APU, the Rabbit's CPU had DIMMs and what appears to be DDR4 memory that is liquid-cooled. Like the standard blades, everything is liquid-cooled, so there are not any fans in the system."
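To keep the tour details straight, here is a small, purely illustrative Python sketch that restates the rack layout Kennedy described. The class and field names are invented for this summary; the counts and descriptions are taken directly from the quotes above.

```python
from dataclasses import dataclass, field

@dataclass
class RabbitUnit:
    """The 'Rabbit' storage unit in the top half of the switch side, per the tour notes."""
    nvme_ssds: int = 18                        # liquid-cooled NVMe SSDs
    cpu: str = "AMD EPYC 7003 (Milan)-class"   # as Kennedy identified it
    memory: str = "DDR4 DIMMs, liquid-cooled"

@dataclass
class ElCapitanRack:
    """One El Capitan rack as described on the tour; structure is my own shorthand."""
    compute_blades: int = 128                  # fully liquid-cooled compute blades per rack
    interconnect: str = "HPE Slingshot, cabled with DACs and optics"
    switch_tray_fill: str = "bottom half only"  # LLNL's codes don't need full population
    rabbit: RabbitUnit = field(default_factory=RabbitUnit)

print(ElCapitanRack())
```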
While El Capitan is less than half the size the xAI Colossus cluster was in September, when Elon Musk's supercomputer was equipped with "just" 100,000 Nvidia H100 GPUs (plans are afoot to expand it to a million GPUs), Kennedy points out that "systems like this are still huge and are done on a fraction of the budget of a 100,000 plus GPU system."