When it comes to artificial intelligence, a few names dominate the conversation: Nvidia ( NASDAQ:NVDA ), Taiwan Semiconductor Manufacturing ( NYSE:TSM ), and, in recent months, even Intel ( NASDAQ:INTC ). These companies rightfully claim the spotlight: they drive the AI narrative because they deliver tangible results, including record revenues, market share gains, and innovations that fuel everything from chatbots to autonomous systems. Investors flock to them, bidding up shares on every earnings beat or product launch. Yet beneath the hype, AI's foundation relies on more than processing power and fabrication prowess. Data storage and high-speed memory are the unsung necessities that enable seamless data flow, preventing bottlenecks in the AI pipeline.
PC memory prices are set to rise as the major suppliers shift manufacturing capacity to the more lucrative server DRAM and HBM markets amid reports of tightening supplies. Prices are expected to climb in Q4 2025, according to market watcher TrendForce, which points the finger at the three top DRAM makers: Samsung, SK Hynix, and Micron Technology.
Nvidia on Tuesday unveiled the Rubin CPX, a GPU designed specifically to accelerate extremely long-context AI workflows, such as those behind code assistants like Microsoft's GitHub Copilot, while cutting back on pricey and power-hungry high-bandwidth memory (HBM). The first indication that Nvidia might be moving in this direction came when CEO Jensen Huang unveiled the Dynamo framework during his GTC keynote this spring. Dynamo brought mainstream attention to the idea of disaggregated inference, in which the compute-heavy prefill of a long prompt and the memory-bound, token-by-token decode phase are split across separate hardware.
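To make the idea concrete, here is a minimal toy sketch of disaggregated inference. This is purely illustrative and assumes nothing about Nvidia's Dynamo API: the `prefill_worker` and `decode_worker` functions, and the stand-in key/value cache, are hypothetical names invented for this example. The point is only the division of labor: one worker does the compute-heavy prefill over the long prompt and hands off its cache, and a second worker generates tokens against that cache.

```python
# Toy model of disaggregated inference (illustrative only; not Nvidia's Dynamo API).
# Prefill builds the key/value cache from the prompt; decode consumes and extends it.

def prefill_worker(prompt_tokens):
    """Simulate the compute-bound prefill phase: one K/V entry per prompt token."""
    # In a real model these would be attention keys/values; here they are dummies.
    return [(t, t * 2) for t in prompt_tokens]

def decode_worker(kv_cache, steps):
    """Simulate the memory-bound decode phase using the handed-off cache."""
    generated = []
    for _ in range(steps):
        # A real decoder attends over the whole cache; we derive a dummy token
        # from the most recent entries to mimic that dependence.
        next_token = sum(k for k, _ in kv_cache[-4:]) % 100
        generated.append(next_token)
        kv_cache.append((next_token, next_token * 2))  # cache grows as we decode
    return generated

if __name__ == "__main__":
    cache = prefill_worker(list(range(8)))   # long-context prefill on worker A
    output = decode_worker(cache, steps=3)   # token generation on worker B
    print(output)
```

The split matters because the two phases stress hardware differently: prefill is dominated by raw compute over the prompt, while decode is dominated by repeatedly reading a growing cache, which is why a context-focused part like the Rubin CPX can get away with less HBM.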