The paper "Power Stabilization for AI Training Datacenters" argues that oscillating energy demand between the power-intensive GPU compute phase and the less-taxing communication phase, in which parallelized GPU calculations are synchronized, represents a barrier to the development of AI models. The authors note that the swing in power consumption between the two phases is extreme: the compute phase approaches the thermal limits of the GPU, while the communication phase draws power close to idle levels.
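To make the oscillation concrete, here is a minimal, illustrative sketch (not code from the paper) that models a synchronous training loop as alternating compute and communication segments. All numbers are assumed round figures for a modern datacenter GPU, not measurements: roughly 700 W near the thermal limit during compute, and roughly 100 W near idle during gradient synchronization.

```python
# Illustrative model of per-step power oscillation in synchronous training.
# Assumed figures (not from the paper): ~700 W compute-phase draw near the
# GPU thermal limit, ~100 W communication-phase draw close to idle.

def training_power_trace(steps, compute_ms, comm_ms, compute_w, idle_w):
    """Return alternating (duration_ms, watts) segments for each training step:
    a compute phase at high power, then a communication phase near idle."""
    segments = []
    for _ in range(steps):
        segments.append((compute_ms, compute_w))  # GPUs fully loaded
        segments.append((comm_ms, idle_w))        # gradients synchronize
    return segments

def mean_power(segments):
    """Time-weighted average power across the trace, in watts."""
    total_energy = sum(ms * w for ms, w in segments)  # watt-milliseconds
    total_time = sum(ms for ms, _ in segments)
    return total_energy / total_time

def swing_watts(segments):
    """Peak-to-trough power swing, the quantity the grid must absorb."""
    powers = [w for _, w in segments]
    return max(powers) - min(powers)

# Example: 300 ms of compute at 700 W, 100 ms of communication at 100 W.
trace = training_power_trace(steps=2, compute_ms=300, comm_ms=100,
                             compute_w=700.0, idle_w=100.0)
print(f"mean power: {mean_power(trace):.0f} W, swing: {swing_watts(trace):.0f} W")
```

Because every GPU in a data-parallel job hits the communication barrier at roughly the same moment, this per-device swing multiplies across tens of thousands of accelerators, which is what turns a per-GPU oscillation into a facility-scale demand problem.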
AI Growth Zones are designated sites that are well-suited to housing AI-enabled datacentres and their supporting infrastructure, requiring enhanced access to power supplies of at least 500 MW.
"With the increasing data throughput required in AI and other applications, there remains a strong demand for high performance. PCIe technology will continue to deliver a cost-effective, high-bandwidth, and low-latency I/O interconnect to meet industry needs."
Worldwide data center capacity will increase by 46% over the next two years, equivalent to a jump of almost 21,000 megawatts, as demand surges.
The AI Infrastructure directive addresses energy and permitting issues associated with data centers and the computational demands of running AI applications, requiring the Department of Energy to issue requests for proposals (RFPs).
The growth in electricity demand from data centers is threatening the South's transition away from fossil fuels and toward clean energy. Current projections indicate an increase in reliance on fossil fuels, further raising carbon emissions.
AI data centers' carbon emissions are expected to surge 11-fold by 2030, with predictions that these centers will consume 612 terawatt-hours of electricity, driving a 3.4% increase in global carbon emissions.