Apple has been investing heavily in artificial intelligence, as much of the research material it has published in recent months shows. The company is expected to announce its AI strategy at WWDC in June, as part of iOS 18 and its other new operating systems.
According to Mark Gurman in the latest Power On newsletter, Apple has developed an offline, on-device large language model to power the new iPhone AI features, and it will tout the privacy and speed benefits of that approach.
9to5Mac previously found references in iOS 17.4 to an on-device model called “Ajax”. Apple is also developing server-hosted versions of Ajax.
Naturally, LLMs running on a device cannot be as powerful as those running on huge server farms, with tens of billions of parameters and continually updated data.
However, Apple engineers can make the most of an on-device approach thanks to the company’s vertical integration, with software tailored to the Apple silicon chips inside its devices. As well as responding more quickly, on-device models have the advantage of working offline in locations with limited or no connectivity.
Although an on-device LLM may lack the rich embedded knowledge base of something like ChatGPT, it can be tuned to perform a wide range of tasks quite well. For instance, an on-device LLM could generate sophisticated auto-replies in Messages, or interpret Siri requests more accurately.
This approach also aligns perfectly with Apple’s strict privacy policies: if your downloaded emails and text messages are processed by an on-device model, the data never leaves your device.
Models running on mobile devices could also create documents or images from prompts with decent results. And by partnering with a company like Google, Apple could still fall back on a server-side model such as Gemini for certain tasks.