
The update means that open-source large language models (including Kimi K2, DeepSeek V3.1, and GLM 4.5) can now be accessed and tested from inside the VS Code editor, without the need to switch platforms or juggle multiple tools. The workflow is designed to be simple: developers install the Hugging Face Copilot Chat extension, open VS Code's chat interface, select the Hugging Face provider, enter their Hugging Face token, and then add the models they want to use.
One practical note quickly surfaced in community discussions: the feature requires an up-to-date version of the editor. As AI researcher Aditya Wresniyandaka highlighted on LinkedIn: "The document forgot to mention you need VS Code August 2025 version 1.104.0."

GitHub Copilot Chat has traditionally relied on a closed set of proprietary models. By linking it with Hugging Face's network of Inference Providers, developers gain access to a much broader set of AI tools, including experimental and highly specialized open models.
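The version requirement above boils down to a simple numeric comparison. As a minimal sketch (the installed version string would normally come from running `code --version` yourself), checking against the 1.104.0 minimum looks like this:

```python
# Sketch: verify that an installed VS Code version meets the 1.104.0
# minimum the Hugging Face provider requires (per the article).

MIN_VERSION = (1, 104, 0)

def parse_version(text: str) -> tuple:
    """Parse a dotted version string like '1.104.2' into a tuple of ints."""
    return tuple(int(part) for part in text.strip().split("."))

def meets_minimum(installed: str, minimum: tuple = MIN_VERSION) -> bool:
    """True when the installed version is at or above the minimum."""
    return parse_version(installed) >= minimum

print(meets_minimum("1.103.2"))  # False: August 2025 release needed
print(meets_minimum("1.104.0"))  # True
```

Tuple comparison handles the multi-digit minor version correctly, which a plain string comparison would not (`"1.99.0" > "1.104.0"` lexicographically).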
In summary: Hugging Face enabled direct connections from its Inference Providers to GitHub Copilot Chat inside Visual Studio Code, allowing open-source large language models such as Kimi K2, DeepSeek V3.1, and GLM 4.5 to run in the editor. Developers install the Hugging Face Copilot Chat extension, open VS Code's chat, select the provider, enter a Hugging Face token, and add models, which they can then test and switch between via the model picker. The feature requires an up-to-date VS Code (the August 2025 release, v1.104.0). The integration expands model choice beyond proprietary defaults, enabling access to experimental and specialized models. Hugging Face Inference Providers offer unified API access to hundreds of models through a single endpoint and token, simplifying reliability and tooling management.
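The "unified API access" mentioned above means the same open models are also reachable programmatically, outside the editor. The sketch below builds a request against Hugging Face's OpenAI-compatible chat-completions route; the endpoint URL and model ID are assumptions for illustration, not details confirmed by the article, and the token placeholder is hypothetical:

```python
# Sketch (not the Copilot Chat extension's internals): querying an open
# model through Hugging Face's OpenAI-compatible chat-completions API.
# Endpoint URL and model ID below are assumptions for illustration.
import json
import urllib.request

ROUTER_URL = "https://router.huggingface.co/v1/chat/completions"  # assumed endpoint

def build_chat_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for an open model."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        ROUTER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # your Hugging Face token
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Switching models is just a different `model` string (ID is assumed here).
req = build_chat_request("deepseek-ai/DeepSeek-V3.1", "Explain borrowing in Rust.", "hf_...")
```

This mirrors what the model picker does conceptually: the provider, token, and model name select which backend answers the chat request.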
Read at InfoQ