Artificial Intelligence · InfoQ · 1 day ago
Google's New LiteRT Accelerator Supercharges AI Workloads on Snapdragon-Powered Android Devices
QNN enables up to 100x CPU and 10x GPU speedups by delegating LiteRT models to NPUs on Snapdragon 8 devices, improving latency and power efficiency.
Artificial Intelligence · InfoQ · 6 months ago
Google Enhances LiteRT for Faster On-Device Inference
LiteRT simplifies on-device ML inference with enhanced GPU and NPU support for faster performance and lower power consumption.