Artificial intelligence has wowed me, but it was never something I wanted to use daily, especially because it required a cumbersome number of apps and subscriptions, depending on the task.
And then there was the privacy issue. Using AI even for scheduling or communicating with colleagues entailed blindly trusting that big, data-hungry AI companies wouldn’t abuse my sensitive personal info. I don’t trust most tech companies to put my privacy concerns above their profit concerns.
But then Apple entered the discussion. Yesterday, at its annual Worldwide Developers Conference (WWDC), Apple dove head-first into the AI race when it announced Apple Intelligence, its implementation of generative AI. Instead of just being another LLM chatbot or generative AI image creator, Apple Intelligence is a layer of AI that will be embedded across the company’s three main upcoming operating systems, iOS 18, iPadOS 18, and macOS Sequoia. This integration allows Apple’s AI to blend seamlessly into the background—it’s right there when you need it and invisible when you don’t.
But what’s most amazing about Apple Intelligence is how private it is. Apple Intelligence may get to know a lot about you, but Apple itself will know nothing, and the system is designed so that it never can. In other words, Apple Intelligence is the first generative AI platform designed from the ground up to protect your privacy. Hours after WWDC, I spoke to Apple’s senior vice president of software engineering, Craig Federighi, about these privacy protections and how AI will affect Apple, users, and the industry going forward.
Apple Intelligence is riding a truly “big wave”
Federighi oversees the development of one of the world’s biggest operating systems, iOS, as well as the all-important macOS operating system that powers Apple’s Macs. I wanted to get his take on AI, and just how entrenched, or not, it will be in our daily lives by the end of the decade.
“It’s a substantial transformative technology, in the same way the internet has been; in the same way mobility has been,” he says. “It’s one of the big waves, and it’s going to play out over many years,” not dissimilar to even older watershed tech waves, like microprocessor technology, which evolved over a long timeline. (At Apple, the rise of the microprocessor led to the Macintosh, the world’s first personal computer that was easy for the average person to use. The iMac made the internet accessible to everyone. The iPhone virtually defined the mobility wave as much as it was propelled by it.)
Now, with Apple Intelligence, the company is building a friendlier, more personalized relationship between AI and people, one that people find useful and easy to use and that also respects their privacy––something other generative AI systems and LLMs aren’t exactly known for.
“Data handling practices around different AI services and chatbots vary substantially, and some of the guarantees—if you can call them that—are limited,” Federighi tells me. He didn’t name names, but I think anyone would be hard-pressed to list an AI service or feature that people didn’t have privacy concerns about.
“We wanted to establish an entirely different bar,” Federighi says. “So we viewed it as foundational, and as a prerequisite to how we offered personal intelligence, that your personal information remained entirely yours and under your control. And no one, not even Apple, would have any visibility onto that information, even if our data center was processing your request.”
How Apple Intelligence maintains your privacy when other AI systems can’t
It takes a lot of raw processing power to run an AI or LLM—power most smartphones or PCs lack. To get around this, most AI companies run their AI on powerful servers in the cloud, where users must send their data and requests to be analyzed and computed.
But since Apple’s latest devices, including all Apple Silicon Macs and iPads, and the iPhone 15 Pro series, contain the world’s most advanced personal computer chips, they are powerful enough to run and process many AI tasks on the device itself. Users don’t have to send off their data to a remote server for processing, which in turn means their personal data never goes to Apple.
Of course, there are some cases where even the chips on the latest Macs and iPhones won’t have the power or speed to process a user’s AI request locally on device. In that case, the user’s request will be sent to Apple’s servers in the cloud. However, Apple’s implementation of server-side AI is radically different from that of other major AI players such as OpenAI and Google.
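Conceptually, that on-device-first routing can be sketched in a few lines. To be clear, this is a hypothetical illustration in Python; the function and argument names (`route_ai_request`, `task_complexity`, `device_capacity`) are invented for this sketch and are not real Apple APIs.

```python
# Hypothetical sketch of on-device-first AI routing.
# None of these names are actual Apple APIs.

def route_ai_request(task_complexity: int, device_capacity: int) -> str:
    """Prefer local processing; fall back to Private Cloud Compute
    only when a task exceeds what the device's chip can handle."""
    if task_complexity <= device_capacity:
        return "on-device"          # personal data never leaves the device
    return "private-cloud-compute"  # only the minimum data needed is sent

# A light task (summarizing a short email) stays local;
# a heavy one (long multi-document analysis) goes to the cloud.
print(route_ai_request(task_complexity=3, device_capacity=10))   # -> on-device
print(route_ai_request(task_complexity=50, device_capacity=10))  # -> private-cloud-compute
```

The key design point is the default: the cloud is the exception path, not the first resort.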
Called Private Cloud Compute (PCC), Apple’s implementation of server-side AI only sends the minimum information required from your device to the cloud for processing. It is impossible for Apple’s AI servers, which are custom-built by the company, to store data requests—the information is cryptographically destroyed after the processed request is returned to the user and is never seen by anyone at Apple. (Indeed, PCC was purposely designed to make it impossible for anyone at Apple to ever see your data—you can read a deep dive into all the security protections here).
But you don’t have to take Apple’s word on any of this, either. The company is giving independent security researchers access to its Apple Intelligence server implementation to verify that all of Apple’s privacy and security claims for PCC are accurate. For the privacy-minded, these security redundancies provide a level of trust and assurance previously unheard of in the AI space.
I ask Federighi if he thinks there will ever be a day when computer processors get so powerful that a server-based technology like PCC won’t even be needed.
“I couldn’t rule it out,” he says, “…but even in that world, I think that you would expect that at times your device is going to, in servicing your request, reach out at least to knowledge stores that are outside the device.” For example, you’ll want to know if a restaurant’s opening times have changed—information the on-device LLM might not have. “So even in that future, I think there’s going to be a role for contacting external services.”
Inside the ChatGPT partnership
Federighi tells me that Apple Intelligence was designed as a personal intelligence, highly tailored to the user, harnessing your on-device data such as photos, contacts, messages, and emails to carry out personalized tasks unique to you. For example, I could instruct Apple Intelligence to “Send an email to my extended family reminding them that dad’s 90th birthday party begins at the restaurant at noon—and be sure to include directions to the restaurant” and it will do it for me—all without me needing to tell it who my family are or which restaurant I’m talking about (it knows that already from my on-device data).
Yet Federighi acknowledges that existing LLMs with vast knowledge of public information, such as ChatGPT, also have their use. “These very large frontier models have interesting capabilities that some users appreciate, and we saw that integration into our experiences could make [those capabilities] much more accessible than [they are] today.”
With that in mind, as part of its AI push, Apple is partnering with OpenAI to provide even more robust AI offerings across its platforms. However, it’s important to note that OpenAI’s ChatGPT doesn’t power Apple Intelligence. The two are completely separate. Apple Intelligence is powered exclusively by Apple’s LLMs and AI models.
Where OpenAI’s ChatGPT comes into play is when a user has a more complex AI request. Someone could use their Mac or iPhone to send a query to ChatGPT if they want to, say, have ChatGPT write a movie script for them.
And Apple has designed its ChatGPT integration with a privacy-first mindset, too. No user data is ever sent to OpenAI without the user’s permission. Before any request is sent to ChatGPT for processing, a user must first manually confirm that they want to do so. And though Federighi explains that Apple partnered with OpenAI because GPT-4o is currently the best LLM out there for broad world knowledge, Apple may partner with other LLM providers in the future, allowing users to bolt on the external LLM provider of their choice.
“For instance, if I’m a doctor, I might someday want to bring a medical model in; or if I’m a lawyer, I might have a model that’s refined for legal work that I want to bring into my experience. We see that’s ultimately something very complementary to what we’re doing with personal intelligence.”
The China factor
One thing Apple didn’t address publicly during its WWDC keynote was Apple Intelligence’s prospects in China––Apple’s second-largest market, and thus one of its most important. But China has much stricter regulations around artificial intelligence models––especially those made by non-Chinese companies. I asked Federighi if the iPhone maker would bring Apple Intelligence to the country.
“We certainly want to find a way to bring all of our best product capabilities to all of our customers,” he admitted, while acknowledging that “in some regions of the world, there are regulations that need to be worked through.” But Federighi confirmed that Apple has begun that process. “We don’t have timing to announce right now, but it’s certainly something we want to do.”
Apple, the world’s biggest AI company?
Apple users, industry watchers, and Wall Street have all been eagerly awaiting Apple’s official entry into the AI space. Many have said that Apple is late to the game—competitors Google and Microsoft had jumped in within months of ChatGPT taking the world by storm in late 2022.
But Apple has been using AI in its products for years (referring to the tech by another name: machine learning), and with the release of its privacy-focused Apple Intelligence––now available to developers in beta form and coming to users this fall––it is fair to say Apple is jumping ahead.
And when iOS 18, iPadOS 18, and macOS Sequoia are available for download to tens of millions of existing devices around the world this fall, I would argue that Apple might then be considered one of the biggest AI companies on the planet. Federighi didn’t disagree. He just sees it differently.
“I don’t think we frame ourselves around being the ‘biggest technology anything’ company,” he said. “We are a user experience, a user product, company. That’s what we are centered around, and we judge ourselves by the experiences we deliver for our customers, and how we protect our customers and their interests.” AI, he says, is merely “a means to an end.”