OpenAI turns to Broadcom for 10GW of custom accelerators
Briefly

"By designing its own chips and systems, OpenAI can embed what it's learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence. The racks will be "scaled entirely with Ethernet and other connectivity solutions from Broadcom... with deployments across OpenAI's facilities and partner datacenters.""
"Altman said the agreement covered a full system, apparently geared towards inference. He added that it turned out Broadcom was also "incredible" at designing systems and 10 GW was an astonishing capacity on top of what it's already building. The GPUs of today were amazing but with the combination of model, chip, and rack, "we will be able to wring out so much more intelligence per watt," he continued."
OpenAI and Broadcom are collaborating on custom silicon and full rack systems totaling 10 GW of AI accelerators, with deployment beginning late next year. The design embeds learnings from frontier models directly into the hardware to increase capability and intelligence, and appears focused on inference workloads. Racks will use Broadcom Ethernet, PCIe, and optical connectivity and will be deployed across OpenAI facilities and partner datacenters. Executives say that co-designing model, chip, and rack will improve intelligence per watt and expand global capacity to support advanced AI usage.
Read at The Register