
"The data center as it exists today is a relic of its telecom roots. It's a real estate model built around leased space, centralized air conditioning, and infrastructure accustomed to racks drawing 10 to 40 kilowatts. That model still works adequately for standard enterprise IT. For AI, it does not."
"Nvidia's Vera Rubin systems will demand up to 600 kW per rack. That kind of wattage pushes data center cooling demands per unit volume toward those of internal combustion engines, which only produce that kind of output intermittently."
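To make the combustion-engine comparison concrete, a rough power-density estimate can be sketched. The rack dimensions below are assumptions (a typical rack envelope of about 0.6 m × 1.2 m × 2.0 m), not figures from the article; only the 600 kW draw is cited.

```python
# Rough power-density estimate for a 600 kW AI rack.
# Rack dimensions are assumed; only the 600 kW figure comes from the article.

RACK_POWER_KW = 600.0               # per-rack draw cited for Vera Rubin systems
RACK_VOLUME_M3 = 0.6 * 1.2 * 2.0    # assumed rack envelope, ~1.44 m^3

density_kw_per_m3 = RACK_POWER_KW / RACK_VOLUME_M3
print(f"{density_kw_per_m3:.0f} kW per cubic metre")  # prints "417 kW per cubic metre"
```

Hundreds of kilowatts of heat per cubic metre is engine-like territory, with the difference the article notes: an engine peaks intermittently, while an AI rack runs at that load continuously.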
"The solution is not just to upgrade the cooling. An end-to-end infrastructure renewal is required, necessitating changes from the electrical grid down to the silicon chips themselves."
"Cooling needs to be done with liquid water rather than air, with direct copper heat exchangers mounted next to the AI accelerators to maximize efficiency, fully integrated into the plumbing of the modern data center."
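Why water rather than air comes down to heat capacity. A back-of-the-envelope calculation shows the coolant flow needed to carry 600 kW away from a rack; the 10 K temperature rise across the cold plates is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope water flow needed to remove 600 kW of heat,
# assuming a 10 K inlet-to-outlet temperature rise (an assumption,
# not a figure from the article).

HEAT_LOAD_W = 600_000     # 600 kW per rack
CP_WATER = 4186.0         # specific heat of water, J/(kg*K)
DELTA_T_K = 10.0          # assumed coolant temperature rise

# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
mass_flow_kg_s = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)
print(f"{mass_flow_kg_s:.1f} kg/s of water")  # prints "14.3 kg/s of water"
```

Roughly 14 litres of water per second per rack, under these assumptions. Moving the equivalent heat with air would require orders of magnitude more volumetric flow, which is why the quote treats facility-scale water plumbing as a core part of the design rather than an add-on.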
The data center industry faces challenges from AI's rising power demands, which exceed what its telecom-era infrastructure was designed to handle. The current model, built on leased space and centralized air conditioning, is inadequate for AI workloads that can draw up to 600 kW per rack. Meeting that density requires a shift to liquid cooling and an end-to-end infrastructure renewal, integrating cooling with power delivery and the electrical grid to serve the constant operational load of AI efficiently.
Read at Techzine Global