AI is rewriting how power flows through the datacenter
Briefly

"The AI craze has seen datacenter operators stuffing servers with as many accelerators (typically GPUs) as will fit inside, then filling the racks in their data halls with these high-performance boxes. With some GPUs now using 700 W of energy each, the power required by an entire rack or datacenter of kit is growing rapidly, putting a strain on electrical distribution infrastructure."
"Last year, Nvidia introduced an integrated rack-scale system for AI processing, the DGX GB200 NVL72, which needs 120 kW per unit, while cloud giant Google disclosed this year that it is planning for datacenter racks supporting 1 MW of IT hardware loads. To cap it all, some of the key infrastructure components such as transformers are reported to be holding things up through long lead times of 28 weeks or more."
Datacenter racks and facilities are experiencing rapidly increasing power demand as operators maximize accelerator (GPU) density. Some GPUs now draw 700 W each, pushing rack and facility loads into the hundreds of kilowatts and toward megawatt scale. Integrated rack-scale systems can require 120 kW per unit, and operators are already planning for racks supporting 1 MW of IT load. Meanwhile, conventional infrastructure components such as transformers face lead times of 28 weeks or more, constraining deployment. Solid-state transformers (SSTs) offer far smaller, lighter, and more efficient power conversion by operating at much higher frequencies than AC mains; they depend on silicon carbide semiconductors to deliver both high-frequency switching and blocking voltages of two to three kilovolts.
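
To put the figures in perspective, here is a rough back-of-envelope sketch (in Python, not from the article) of how per-GPU draw compounds into rack- and hall-level load. The 72-GPU count matches the NVL72 naming; the overhead multiplier for CPUs, networking, cooling, and conversion losses is an illustrative assumption chosen to land near the quoted rack figure.

GPU_POWER_W = 700          # per-accelerator draw cited in the article
GPUS_PER_RACK = 72         # e.g. an NVL72-class rack-scale system
OVERHEAD_FACTOR = 2.4      # assumed multiplier for non-GPU load (CPUs, fabric, cooling, losses)

rack_power_kw = GPU_POWER_W * GPUS_PER_RACK * OVERHEAD_FACTOR / 1_000
print(f"Estimated rack load: {rack_power_kw:.0f} kW")   # ~121 kW, in line with the quoted 120 kW

racks_per_megawatt = 1_000 / rack_power_kw
print(f"Racks per MW of IT load: {racks_per_megawatt:.1f}")

At these densities, a single megawatt of IT load covers only around eight such racks, which is why distribution infrastructure, not floor space, becomes the bottleneck.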
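The size advantage of SSTs follows from the standard transformer EMF equation, V_rms = 4.44 · f · N · B_max · A_core: at fixed voltage, turns, and flux density, the required core cross-section scales as 1/f. The sketch below illustrates that scaling; the voltage, turn count, and flux density values are illustrative assumptions, not real SST design parameters.

def core_area_m2(v_rms: float, freq_hz: float, turns: int, b_max_t: float) -> float:
    """Required core cross-section (m^2) from the transformer EMF equation."""
    return v_rms / (4.44 * freq_hz * turns * b_max_t)

V_RMS = 2_400.0   # assumed winding voltage, volts
TURNS = 100       # assumed primary turns
B_MAX = 1.0       # assumed peak flux density, tesla (held constant for comparison)

a_mains = core_area_m2(V_RMS, 50.0, TURNS, B_MAX)      # 50 Hz mains transformer
a_sst = core_area_m2(V_RMS, 20_000.0, TURNS, B_MAX)    # 20 kHz SiC-switched stage

print(f"Core area at 50 Hz:  {a_mains * 1e4:.1f} cm^2")
print(f"Core area at 20 kHz: {a_sst * 1e4:.3f} cm^2")
print(f"Reduction factor:    {a_mains / a_sst:.0f}x")  # 20,000 / 50 = 400x

In practice the gain is smaller than this idealized 400x, since high-frequency core materials tolerate lower flux densities than mains-grade steel, but the inverse-frequency scaling is what lets an SST shrink the magnetics by an order of magnitude or more.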
Read at The Register