
"Against the idea of the bubble is very simple. It's this idea that we were talking about earlier, which this is basically a winner-takes-all strategy, and if everyone develops the products that they say they're going to develop, if they develop A.I. models or systems that magically solve every business or person's problems in the entire world, then perhaps you can justify some of those valuations. It's not a bubble if magic occurs."
"So I think there's a concern as A.I. becomes an even more dominant force in the U.S. economy, if the bubble bursts, or even if the promised revenue and savings doesn't materialize to the scale that people think it's going to then you're going to have an economic impact that potentially feeds on itself, which would be similar to what we saw again, not to be too pessimistic but similar to what we saw back in the run up to the great financial crisis."
AI companies are promising near-magical solutions and pricing themselves on a winner-takes-all assumption. If widely adopted models deliver transformative revenue gains and cost savings at scale, high valuations could be justified. If those promised savings and revenues fail to materialize, concentrated overinvestment in AI could create feedback effects that amplify economic pain. A sector that becomes large enough can itself become the proximate source of broader economic problems, as housing did before the financial crisis. Observing the scale of actual productivity gains and the dependence on projected savings is critical to assessing systemic risk.
Read at www.nytimes.com