
"Developers on Apple platforms often face a fragmented ecosystem when using language models. Local models via Core ML or MLX offer privacy and offline capabilities, while cloud services like OpenAI, Anthropic, or Google Gemini provide advanced features. AnyLanguageModel, a new Swift package, simplifies integration by offering a unified API for both local and remote models. AnyLanguageModel works seamlessly with Apple's Foundation Models framework, letting developers switch with minimal code changes while keeping session and response structures intact."
"Most apps use some mix of local & remote models from different providers, and it's annoying to get them all to play nicely together. Apple's Foundation Models provides a kind of 'public option' - a fallback built into all macOS and iOS devices. Since that's available only through Foundation Models, it made sense to target that as the API for supporting other providers, too."
Behind that unified API, AnyLanguageModel supports Core ML, MLX, llama.cpp (via llama.swift), Ollama-hosted models, and cloud services from OpenAI, Anthropic, Google Gemini, and Hugging Face, using Swift package traits to keep unused backends out of the build. The package also handles vision-language prompts that pair images with text, and development continues toward tool calling, structured outputs, and performance improvements.
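Because the package mirrors Foundation Models' session API, switching from the on-device system model to a hosted one should reduce to changing the model value handed to the session. Here is a minimal sketch of that switch; the provider type name, its initializer, and the model identifier are assumptions based on the article's description, not confirmed API:

```swift
import AnyLanguageModel
import Foundation

// Local: Apple's on-device system model, the same entry point
// the Foundation Models framework exposes.
let localModel = SystemLanguageModel.default

// Remote: a cloud provider behind the same API surface.
// Type name, initializer, and model ID are assumptions for illustration.
let remoteModel = AnthropicLanguageModel(
    apiKey: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"] ?? "",
    model: "claude-sonnet-4-5"
)

// The session and response shapes stay the same either way, so
// switching providers is a one-line change to the model argument.
let session = LanguageModelSession(model: remoteModel) // or: LanguageModelSession(model: localModel)
let response = try await session.respond(to: "Summarize Swift concurrency in two sentences.")
print(response.content)
```

Matching Foundation Models' types means code written today against Apple's framework can adopt other providers later by swapping the import and the model value, leaving the surrounding session logic untouched.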
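The Swift package traits mentioned above are a SwiftPM 6.1 feature that lets a consumer opt into only the backends it needs, so heavyweight dependencies such as MLX stay out of builds that never use them. A sketch of what such a manifest could look like; the repository URL, version, and trait names are illustrative assumptions:

```swift
// swift-tools-version: 6.1
// Package.swift — URL, version, and trait names are illustrative assumptions.
import PackageDescription

let package = Package(
    name: "MyChatApp",
    platforms: [.macOS(.v15), .iOS(.v18)],
    dependencies: [
        .package(
            url: "https://github.com/mattt/AnyLanguageModel.git",
            from: "0.1.0",
            traits: ["MLX"]  // opt into only the MLX backend; others stay out of the build
        )
    ],
    targets: [
        .executableTarget(
            name: "MyChatApp",
            dependencies: [
                .product(name: "AnyLanguageModel", package: "AnyLanguageModel")
            ]
        )
    ]
)
```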
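For vision-language prompts, the summary only says images can be paired with text; the call shape below is a guess at how such an API might look and should not be read as the package's documented surface:

```swift
import AnyLanguageModel
import Foundation

// Purely illustrative: the article says image-plus-text prompts are
// supported, but this parameter label and image value are guesses,
// not the package's documented API.
let session = LanguageModelSession(model: SystemLanguageModel.default)
let response = try await session.respond(
    to: "What does this screenshot show?",
    image: .init(url: URL(fileURLWithPath: "/path/to/screenshot.png"))
)
print(response.content)
```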
Read at InfoQ