
"The Maia 200 chip, which is being produced by Taiwan Semiconductor Manufacturing Co., is making its way to Microsoft data centers in Iowa, with deployments headed to the Phoenix area next. Microsoft invited developers on Monday to start using Maia's control software, but it's not clear when users of the company's Azure cloud service will be able to utilize servers running on the chip. Some of the first units will go to Microsoft's superintelligence team, generating data to improve the next generation of AI models, cloud and AI chief Scott Guthrie said in a blog post."
"The chips will also be used to power the Copilot assistant for businesses and AI models, including OpenAI's latest, that Microsoft rents to cloud customers. Microsoft's chip push started years after Amazon.com Inc. and Alphabet Inc.'s Google began designing their own chips. All three have a similar aim: cost-effective machines that can be seamlessly plugged into data centers and offer savings and other efficiencies to cloud customers. The high costs and short supply of the latest industry-leading chips from Nvidia has fueled a scramble to find alternative sources of computing power."
"Microsoft says its chip delivers better performance on some AI tasks than comparable semiconductors from Google and Amazon Web Services. Maia 200 is also the most efficient inference system Microsoft has ever deployed, Guthrie said, referring to the process of using AI models to generate responses to queries. The company says it's already designing the chip's successor, the Maia 300. Microsoft has other options, should its internal efforts falter: As part of a deal with close partner OpenAI, the company has access to the ChatGPT maker's nascent chip designs."
Microsoft is rolling out the Maia 200 AI chip, produced by Taiwan Semiconductor Manufacturing Co., to data centers in Iowa, with further deployments planned for the Phoenix area. Developers have been invited to use Maia's control software, though it remains unclear when Azure customers will be able to use servers running the chip. Early units will go to Microsoft's superintelligence team to generate data for improving future AI models. The chips will also power the Copilot assistant for businesses and AI models, including OpenAI's latest, that Microsoft rents to cloud customers. The effort targets cost-effective hardware that plugs into data centers as an alternative to scarce, expensive Nvidia chips; Guthrie called Maia 200 the most efficient inference system Microsoft has deployed, and the company is already designing its successor, the Maia 300, while retaining access to OpenAI's chip designs through their partnership.
Read at www.bloomberg.com