
"Nvidia has licensed intellectual property from inferencing chip designer Groq, and hired away some of its senior executives, but stopped short of an outright acquisition. "We've taken a non-exclusive license to Groq's IP and have hired engineering talent from Groq's team to join us in our mission to provide world-leading accelerated computing technology," an Nvidia spokesman said Tuesday, via email. But, he said, "We haven't acquired Groq.""
"Groq designs and sells chips optimized for AI inferencing. These chips, which Groq calls language processing units (LPUs), are lower-powered, lower-priced devices than the GPUs Nvidia designs and sells, which these days are primarily used for training AI models. As the AI market matures, and usage shifts from the creation of AI tools to their use, demand for devices optimized for inferencing is likely to grow."
Nvidia obtained a non-exclusive license to Groq's intellectual property and recruited engineering talent from Groq, but did not acquire the company. Groq produces chips optimized for AI inferencing, branded as language processing units (LPUs), which are lower-powered and lower-priced than Nvidia's GPUs. Nvidia's GPUs are primarily used for training AI models, whereas LPUs target inference workloads. As AI adoption shifts from creating models to deploying and using them, the market for inferencing-optimized devices is expected to expand.
Read at Computerworld