Why Apple is taking a small-model approach to generative AI
Among the biggest questions surrounding models like ChatGPT, Gemini and Midjourney since launch is what role (if any) they'll play in our daily lives. It's something Apple is striving to answer with its own take on the category, Apple Intelligence, which was officially unveiled this week at WWDC 2024.
The company led with flash at Monday's presentation; that's just how keynotes work. When SVP Craig Federighi wasn't skydiving or performing parkour with the aid of some Hollywood (well, Cupertino) magic, Apple was determined to demonstrate that its in-house models were every bit as capable as the competition's.
The jury is still out on that question, with the betas having only dropped Monday, but the company has since revealed some of what makes its approach to generative AI different. First and foremost is scope. Many of the most prominent companies in the space take a "bigger is better" approach to their models. The goal of these systems is to serve as a kind of one-stop shop to the world's information.
Apple's approach to the category, on the other hand, is grounded in something more pragmatic. Apple Intelligence is a more bespoke take on generative AI, built specifically around the company's operating systems. It's a very Apple approach in the sense that it prioritizes a frictionless user experience above all.
Apple Intelligence is a branding exercise in one sense, but in another, the company prefers the generative AI aspects to seamlessly blend into the operating system. It's completely fine, even preferred, if the user has no concept of the underlying technologies that power these systems. That's how Apple products have always worked.
Keeping the models small
The key to much of this is creating smaller models: training the systems on a customized dataset designed specifically for the kinds of functionality required by users of its operating systems. It's not immediately clear how much the size of these models will affect the black box issue, but Apple thinks that, at the very least, having more topic-specific models will increase the transparency around why the system makes specific decisions.
Due to the relatively limited nature of these models, Apple doesn't expect a huge amount of variety when prompting the system to, say, summarize text. Ultimately, however, the variation from prompt to prompt depends on the length of the text being summarized. The operating systems also feature a feedback mechanism through which users can report issues with the generative AI system.
While Apple Intelligence is much more focused than larger models, it can cover a spectrum of requests, thanks to the inclusion of "adapters," which are specialized for different tasks and styles. Broadly, however, Apple's is not a "bigger is better" approach to creating models, as things like size, speed and compute power need to be taken into account, particularly when dealing with on-device models.
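Apple hasn't spelled out the internals of these adapters here, but adapters in this vein are commonly implemented as small, low-rank weight updates (LoRA-style) attached to a frozen, shared base model, so each new task costs megabytes rather than a whole new model. As a rough, hypothetical sketch of that general technique (the class, dimensions and names below are invented for illustration, not Apple's implementation):

```python
import numpy as np

class LoRALinear:
    """A frozen linear layer plus a small low-rank adapter (illustrative only)."""

    def __init__(self, d_in: int, d_out: int, rank: int = 16, scale: float = 1.0):
        rng = np.random.default_rng(0)
        # Base weights are shared across every task and never retrained.
        self.W = rng.standard_normal((d_out, d_in)) * 0.02
        # Only these two small matrices are trained per task
        # (summarization, tone rewriting, etc.).
        self.A = rng.standard_normal((rank, d_in)) * 0.02
        self.B = np.zeros((d_out, rank))  # zero-init: adapter starts as a no-op
        self.scale = scale

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Base path plus a low-rank correction: y = Wx + scale * B(Ax)
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

layer = LoRALinear(d_in=512, d_out=512, rank=16)
y = layer.forward(np.ones(512))  # same output shape as the plain base layer
```

Because B is initialized to zero, a freshly attached adapter leaves the base model's behavior untouched; training then adjusts only A and B, which is why swapping adapters per task is cheap compared with shipping separate full models.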
ChatGPT, Gemini and the rest
Opening up to third-party models like OpenAI's ChatGPT makes sense when considering the limited focus of Apple's models. The company trained its systems specifically for the macOS/iOS experience, so there's going to be plenty of information that falls outside their scope. In cases where the system thinks a third-party application would be better suited to provide a response, a system prompt will ask whether you want to share that information externally. If you don't receive a prompt like this, the request is being processed with Apple's in-house models.
This should function the same way with all external models Apple partners with, including Google Gemini. It's one of the rare instances where the system will draw attention to its use of generative AI. The decision was made, in part, to quash any privacy concerns: every company has different standards when it comes to collecting and training on user data.
Requiring users to opt in each time removes some of the onus from Apple, even if it does add some friction to the process. You can also opt out of using third-party platforms systemwide, though doing so limits the amount of data the operating system/Siri can access. You cannot, however, opt out of Apple Intelligence in one fell swoop; instead, you will have to do so on a feature-by-feature basis.
Private Cloud Compute
Whether the system processes a specific query on-device or via a remote server with Private Cloud Compute, on the other hand, will not be made clear. Apple's philosophy is that such disclosures aren't necessary, since it holds its servers to the same privacy standards as its devices, down to the first-party silicon they run on.
One way to know for certain whether the query is being managed on- or off-device is to disconnect your machine from the internet. If the problem requires cloud computing to solve, but the machine can't find a network, it will throw up an error noting that it cannot complete the requested action.
Apple isn't breaking down the specifics of which actions will require cloud-based processing. There are several factors at play, and the ever-changing nature of these systems means something that requires cloud compute today might be accomplished on-device tomorrow. Nor will on-device computing always be the faster option: speed is one of the parameters Apple Intelligence factors in when determining where to process a prompt.
There are, however, certain operations that will always be performed on-device. The most notable of the bunch is Image Playground, as the full diffusion model is stored locally. Apple tweaked the model so it generates images in three different house styles: animation, illustration and sketch. The animation style looks a good bit like the house style of another Steve Jobs-founded company. Similarly, text generation is currently available in a trio of styles: friendly, professional and concise.
Even at this early beta stage, Image Playground's generation is impressively quick, often taking only a couple of seconds. As for the question of inclusion when generating images of people, the system requires you to input specifics, rather than simply guessing at things like ethnicity.
How Apple will handle datasets
Apple's models are trained on a combination of licensed datasets and publicly accessible information, the latter crawled by Applebot. The company's web crawler has been around for some time now, providing contextual data to applications like Spotlight, Siri and Safari, and it has an existing opt-out feature for publishers.
"With Applebot-Extended," Apple notes, "web publishers can choose to opt out of their website content being used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
This is accomplished with a directive in the website's robots.txt file. With the advent of Apple Intelligence, the company has introduced a second crawler token, Applebot-Extended, which allows sites to remain in search results while being excluded from generative AI model training.
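Concretely, this works through the standard robots.txt convention: a publisher can keep serving the original Applebot (which feeds Siri, Spotlight and search-style features) while blocking the Applebot-Extended token to opt out of training. A minimal sketch of such a file might look like the following; Apple's crawler documentation remains the authoritative reference for the exact syntax:

```
# robots.txt: stay visible to Applebot for search features,
# but opt out of generative AI model training.
User-agent: Applebot
Allow: /

User-agent: Applebot-Extended
Disallow: /
```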
Responsible AI
Apple released a whitepaper on the first day of WWDC titled "Introducing Apple's On-Device and Server Foundation Models." Among other things, it highlights the principles governing the company's AI models. Apple calls out four in particular:
- "Empower users with intelligent tools: We identify areas where AI can be used responsibly to create tools for addressing specific user needs. We respect how our users choose to use these tools to accomplish their goals."
- "Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models."
- "Design with care: We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback."
- "Protect privacy: We protect our users' privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute. We do not use our users' private personal data or user interactions when training our foundation models."
Apple's bespoke approach to foundation models allows the system to be tailored specifically to the user experience. The company has applied this UX-first philosophy since the arrival of the first Mac. Providing as frictionless an experience as possible serves the user, but it should not come at the expense of privacy.
It's a difficult balancing act the company will have to navigate as the current crop of OS betas reaches general availability this year. The ideal approach is to offer as much, or as little, information as the end user requires. Certainly there will be plenty of people who don't care whether, say, a query is executed on-device or in the cloud. They're content to have the system default to whatever is most accurate and efficient.
For privacy advocates and others interested in those specifics, Apple should strive for as much user transparency as possible, not to mention transparency for publishers that might prefer not to have their content used to train these models. There are certain aspects where the black box problem is currently unavoidable, but in cases where transparency can be offered, it should be made available upon users' request.