Hugging Face has launched Inference Providers, a feature that lets developers run AI models through third-party cloud vendors such as SambaNova directly from the Hugging Face Hub. Rather than managing the underlying hardware themselves, developers can route inference requests to a partner's infrastructure of their choice. The goal is to simplify and unify serverless inference and deliver a smoother developer experience.
According to Hugging Face, serverless providers have flourished, and the time was right to offer easy, unified access to serverless inference through a set of great providers.
The company explained that its focus has shifted away from building inference solutions solely in-house and toward collaboration, storage, and model distribution.
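To make the workflow concrete, here is a minimal sketch of what routing a request through a third-party provider can look like with the huggingface_hub Python client. The provider name, model, and token handling shown are illustrative assumptions; exact names and availability depend on the providers and models enabled for a given account.

```python
# Minimal sketch: sending a chat completion through a third-party
# inference provider via the Hugging Face Hub client.
# Assumptions: huggingface_hub is installed, an HF_TOKEN environment
# variable holds a Hugging Face access token, and the chosen model is
# actually served by the named provider.
import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="sambanova",             # provider to route the request to (illustrative)
    api_key=os.environ["HF_TOKEN"],   # Hugging Face token used for auth/billing
)

# Model identifier is an example; any model the provider serves would work.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Summarize what serverless inference means."}],
    max_tokens=128,
)

print(response.choices[0].message.content)
```

In this setup the developer never provisions or manages the serving hardware; the same client call pattern is meant to work across the supported providers, with only the provider argument changing.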