
"According to a release from OpenAI, the dollars heading to AWS from the organization, whose annual revenues are projected to be a mere $13 billion, will be used to "run and scale" its AI workloads, beginning immediately. The release went on to say that AWS will provide the company with "Amazon EC2 UltraServers, featuring hundreds of thousands of chips, and the ability to scale to tens of millions of CPUs for its advanced generative AI workloads.""
""OpenAI's cloud investments have reached a level that no longer correlates with its revenue profile. Despite claims of scaling for future demand, these multi-cloud arrangements are best understood as a form of insurance rather than expansion." The business case here, he said, is not grounded in present usage or profitability, but rather in the anticipation of massive, yet unproven, growth. "OpenAI is distributing its infrastructure load across AWS, Microsoft, Oracle, and Google,""
OpenAI signed a multi-year, $38 billion contract with AWS to run and scale its AI workloads, beginning immediately. The commitment follows a recent $250 billion pledge for Azure services and investments in Stargate data centers with Oracle and other partners. OpenAI's projected annual revenues are about $13 billion, leaving a large gap between its spending commitments and its revenue. AWS will supply Amazon EC2 UltraServers with hundreds of thousands of chips and the ability to scale to tens of millions of CPUs for generative AI workloads. Analysts describe these multi-cloud arrangements as insurance to ensure continuity rather than cost-efficient expansion, with a business case that rests on speculative revenue forecasts rather than current usage or profitability.
 Read at Computerworld