Talk: Apple Foundation Models
This presentation looks at Apple’s foundation models: the on-device language models built into Apple Intelligence.
The setup: cloud-based LLMs work well but come with tradeoffs. Latency, API costs, and the reality that your data travels to someone else’s servers. Apple’s approach puts the model on the device itself.
The practical benefits are straightforward. Sub-second responses because there’s no network round-trip. No per-request fees. Works offline. And for applications handling sensitive information, the data never leaves the device.
For developers, these models are exposed through a Swift framework and integrate directly with Xcode and SwiftUI. They also work alongside MLX, so you can combine Apple’s foundation models with your own specialized models.
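To make the integration point concrete, here is a minimal sketch using Apple's FoundationModels framework. The type and method names (`SystemLanguageModel`, `LanguageModelSession`, `respond(to:)`) follow Apple's published API, but treat the exact signatures as approximate; the prompt and instructions text are illustrative placeholders.

```swift
import FoundationModels

// Sketch: summarize a note entirely on-device.
// Availability depends on the device's hardware and on
// Apple Intelligence being enabled in system settings.
func summarize(_ note: String) async throws -> String? {
    guard case .available = SystemLanguageModel.default.availability else {
        // The model may be unavailable on older devices or while
        // assets are still downloading; handle that path gracefully.
        return nil
    }

    // A session holds the conversation state with the on-device model.
    let session = LanguageModelSession(
        instructions: "Respond in one concise sentence."  // illustrative
    )

    // The request never leaves the device: no network round-trip,
    // no per-request fee, and it works offline.
    let response = try await session.respond(to: "Summarize: \(note)")
    return response.content
}
```

The same session can be reused across multiple prompts, which keeps conversational context without any server-side state.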
The argument I make in this talk: on-device processing isn’t just a privacy feature. It’s becoming a competitive advantage. As models get more efficient and hardware improves, running locally will be the default for many applications.