
"Arm has lifted the lid on its latest mobile platform, comprising new CPU and GPU designs plus rearchitected interconnect and memory management logic, all optimized with a coming wave of AI-enabled smartphones in mind. The UK-based chip designer has been moving towards more integrated solutions rather than just offering cores for several years, and this year's Lumex compute subsystem (CSS) is the latest evolution of that philosophy."
"As with each generation, Arm manages to squeeze more performance and power efficiency out of its designs, claiming an average 15 percent step up in CPU and 20 percent in GPU, while saving 15 percent on power. The key focus with Lumex is on Arm's SME2 Scalable Matrix Extensions in the CPU cluster, which the firm is pushing as the preferred route for AI acceleration, and overall system-level optimizations to boost scalability for devices capable of running AI models."
"Stefan Rosinger, Senior Director for CPUs, said that SME2 gives AI acceleration "an order of magnitude better" than before, and its advantage for a mobile device is that it uses less power and finishes calculations quicker. According to Arm, we can expect to see Lumex implemented in smartphones and other devices later this year or early next year. It has been crafted with 3nm manufacturing in mind,"
Lumex is a mobile compute subsystem combining new CPU and GPU designs with a rearchitected interconnect and memory management, all optimized for AI-enabled smartphones. Arm claims an average CPU performance increase of 15% and GPU gains of 20%, alongside 15% lower power consumption. The CPU cluster emphasizes the SME2 Scalable Matrix Extensions for AI acceleration, which Arm says deliver higher throughput and lower energy use for on-device models. Lumex targets 3nm processes, and licensee chips are expected to exceed 4 GHz clock speeds. Designers can choose from four C1 core variants (C1-Ultra, C1-Premium, C1-Pro and C1-Nano) to balance performance and power.
Read at The Register