I think this is an example of sound strategic thinking by Apple. Instead of attempting to create a general-purpose LLM that outperforms all others, Apple has asked: "What can we uniquely do that no one else can?" The answer lies in their OS-level access to user data, enabling the development of on-device, private models. This approach aligns perfectly with Apple's existing strengths, including their in-house silicon development. By optimizing both chips and models to work seamlessly together, Apple ensures a superior, integrated user experience. Additionally, it appears unlikely that any single LLM provider will dominate in terms of model quality or performance. While OpenAI's models are currently the best, they do not exhibit the same level of differentiation that iPhones do compared to Android devices. Apple, of course, has the capital to develop its own LLM in the meantime if it considers this a material risk. If the ultimate winner is the player that commoditizes its complements, Apple is well-positioned to commoditize LLM providers rather than being commoditized by them.
I wonder how much Apple AI is going to cause people to underestimate LLMs. They are really building a better Siri, which is neat, but a 3B quantized model running on local silicon will be good for some stuff, not agentic reasoning. The cloud model doesn't seem to maintain state or history and isn't frontier. It looks like GPT-4 is reserved for specific questions. It is amazing that you can beat Siri largely with local AI on the phone, but the gap between a 3B model (plus whatever low-latency model they are running in the cloud) and a frontier model is large. It is a completely sensible move, but also a pretty conservative one.
Machine Learning Engineer @ Adobe | Ex-Comcast | AI/ML | Gen AI | LLM | NLP | Transformers | Deep Learning
6mo · Very well written! 👍🏻