Apple Takes a Bite of AI
In 2021, at the height of work from home and Zoom meetings, Facebook rebranded itself as Meta. Virtual worlds, or the metaverse, were the future of work and play, the company said. Soon after, other tech companies jumped aboard with their own metaverse plans. But just two years on, the metaverse was passé, and AI was in. Suddenly, no earnings call was complete without a mention of it.
In both cases, you’d find one obvious player absent early on: Apple.
The company, so the story goes, is often late to big tech trends but redefines categories when it finally arrives. Last year, Apple weighed in on the metaverse with the Vision Pro, and this month, after a notable absence from the AI maelstrom, the company gave generative artificial intelligence the Apple treatment too: The new features across its devices will be called Apple Intelligence. (Presenters at its flagship software event, WWDC, failed to mention AI or artificial intelligence once. But make no mistake, it’s the same stuff.)
According to Apple, various AI models will soon stitch together personal context from across the apps on your device to get useful things done. In an example given at WWDC, the company said this might take the form of reconciling an incoming meeting request with your daughter's play. Will you be late? The AI will check your calendar, texts, email, and traffic to let you know. It can also fetch photos of specific people based on a quick description, generate images or custom emojis, and proofread and edit text. Some of this will be accomplished with Siri, thanks to a much-needed update.
To preserve privacy, simpler tasks will be handled by small language models running on your device. For more complex tasks, the device will hand off to larger models running on what Apple claims is uniquely secure AI cloud infrastructure, which it calls Private Cloud Compute.
But for the most complex tasks, the device will, with your permission, kick requests over to OpenAI’s GPT-4o. The new partnership was a big headline-maker. It involves no cash for now: OpenAI will get exposure to Apple customers in exchange for granting access to its top AI model. Meanwhile, Apple has made it clear it’s open to other partnerships in the future. This could mean a similar arrangement with Google or Anthropic. (How secure and private these third-party arrangements will be remains a point of contention.)
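To make the tiering concrete, here's a minimal sketch of how that three-way routing decision might look, assuming a simple task-complexity score and an explicit consent flag. The type names, thresholds, and consent check are illustrative assumptions, not Apple's actual API.

```swift
// Hypothetical sketch of the three-tier routing described above.
// Names and thresholds are invented for illustration; Apple has not
// published an API like this.

enum ModelTier {
    case onDevice      // small language model running locally
    case privateCloud  // larger model on Apple's secure cloud infrastructure
    case thirdParty    // e.g., OpenAI's GPT-4o, used only with permission
}

struct AssistantRequest {
    let complexity: Int              // assumed 0-10 difficulty estimate
    let userAllowsThirdParty: Bool   // explicit, per-request consent
}

func route(_ request: AssistantRequest) -> ModelTier {
    switch request.complexity {
    case ..<4:
        return .onDevice          // simple tasks stay on the device
    case 4..<8:
        return .privateCloud      // harder tasks go to Apple's cloud
    default:
        // The most complex tasks go to a partner model, but only if
        // the user opts in; otherwise they stay in Apple's cloud.
        return request.userAllowsThirdParty ? .thirdParty : .privateCloud
    }
}

// A simple proofreading request, for example, would stay on device.
print(route(AssistantRequest(complexity: 2, userAllowsThirdParty: false)))
// prints: onDevice
```

The notable design point is that escalation is opt-in at each step: a request only leaves the device, and only leaves Apple's infrastructure, when the cheaper, more private tier can't handle it.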
So, how’d Apple do? The reviews were decidedly mixed: Some people thought the presentation knocked it out of the park. Others said the new tools were boring, a security and privacy nightmare, ethically flawed, or proof Apple isn’t capable of fielding its own advanced AI.
Much of what was announced is still off in the future, so we should be wary of judging a splashy tech demo before it’s in the hands of everyday people. Still, the strategy itself is worth reviewing: It’s a first look at where one of the most valuable companies in the world, with 2.2 billion devices in the wild, is going with its own AI offerings.
As a Fast Company headline summed it up: “OpenAI Promised to Give Us Her. Apple Is Giving Us Gary From 'Veep'.” The former is a vision of AI from science fiction, an endpoint we can imitate but not yet match. The latter is less striking but focuses on making today’s capabilities work for average people, regardless of their experience with AI. Writing for Wired, Will Knight said Apple is selling near-term AI as a “feature not a product.”
“Rather than a stand-alone device or experience, Apple has focused on how generative AI can improve apps and OS features in small yet meaningful ways,” he wrote. The more the tech disappears into the background, the easier it will be for more people to use and trust it.
And although Apple really does look to be lagging behind AI’s cutting edge, it’s come up with some clever workarounds to make the most of things as they stand.
By offloading as much work as possible to local models (which may be a decent amount), Apple saves on sky-high cloud computing costs. For the most complex, and therefore priciest, tasks, it has a free option: OpenAI will foot the bill in the hope of picking up paying subscribers. In addition, the riskiest types of requests (think Google’s AI Overviews and other chatbot fails of recent history) are going to a third party.
Lastly, these AI features will only be available on Apple’s newest phones due to legitimate performance requirements. (Even small language models require some nifty hardware to run.) If the features work smoothly and prove popular, they could push people to upgrade their phones to gain access, driving a cycle of device sales.
To be fair, Apple isn’t the first to sketch out an AI-enhanced operating system instead of a chatbot-only approach. Microsoft is heading in a somewhat similar direction with its Copilot+ PCs, announced last month. (Though it’s already in hot water there.) And Google’s AI Overviews experiment in search—which has notoriously been giving some rather poor advice—shows the models’ weaknesses are magnified when tested at the scale of billions. Finally, Apple’s arrive-late-and-redefine strategy doesn’t always pan out right away. After lower-than-expected sales, the company is reportedly moving on from the Vision Pro in its current form in favor of a redesigned device at a lower price point.
Still, the announcement rounds out the AI strategies of the world's biggest tech companies. Generative AI will be coming to pretty much everyone very soon.
More News From the Future
OpenAI dissolves its safety team. Ilya Sutskever launches a rival startup.
Safety shakeup. A year ago, OpenAI announced plans to dedicate 20 percent of its computing resources to safety research conducted by a newly created superalignment team. Led by cofounder and chief scientist Ilya Sutskever and AI researcher Jan Leike, the team aimed to solve the AI alignment problem—wherein superintelligent AI misaligned with our best interests runs amok—within four years. But last month, Sutskever and Leike left the company, with Leike saying it was no longer sufficiently prioritizing the research. Shortly after, OpenAI dissolved the superalignment team. The startup later formed a new safety team led by board members Sam Altman (who’s also CEO), Adam D’Angelo, and Nicole Seligman to make safety recommendations and decide how to implement them.
AI for good? OpenAI was famously founded with the mission of building safe artificial general intelligence. But after pivoting to a capped-profit structure and taking over $10 billion in investment, mostly from Microsoft, some argue the organization’s approach has changed, placing greater emphasis on creating AGI and rapidly pushing out commercial products to fund it than on making sure it’s safe. The recent safety shakeup supports this narrative in the minds of critics, and this month, 13 former OpenAI and Google DeepMind employees signed an open letter calling on AI companies to give current and former employees the right to speak openly about the technology’s risks without fear of retaliation.
OpenAI Redux. While OpenAI continues along its current trajectory, Sutskever and cofounders Daniel Gross, formerly Apple’s AI lead, and AI researcher Daniel Levy have launched a new startup called Safe Superintelligence, Inc. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.” How they'll fund the company isn’t yet clear, but the team’s star status may be enough for starters. With Sutskever involved—a key player in the birth of both deep learning and large language models—it’ll be well worth watching.
SpaceX brings both Starship stages home in one piece.
The first Starship launch last year destroyed the launchpad and met a fiery end after the vehicle’s first and second stages failed to separate. Since then, in a string of explosions and vehicle disassemblies, the company’s made notable strides. This month, it finally brought both Starship stages home for controlled splashdowns in the Gulf of Mexico and Indian Ocean—big steps toward the goal of making the vehicle fully reusable. For the next launch, which could come as soon as July, the company will try to return the booster to the launchpad and catch it with a pair of robotic arms on the launch tower.
AI is impacting politics in some truly unexpected ways.
Though people have long worried AI deepfakes could influence elections, few predicted AI would run for office. Yet here we are. A chatbot dubbed AI Steve is listed on the ballot in the UK. Of course, the bot won’t be making decisions or speeches. Rather, the plan is for it to chat with thousands of people so a human politician can form better policy from direct voter feedback. Its creator, Steve Endacott, would then represent those policies in Parliament. Bizarrely, this isn’t a lone example. Another chatbot, called VIC, isn’t technically on the ballot, but it would (perhaps inadvisably) be used to make actual decisions. That is, if VIC’s creator can convince Wyoming’s secretary of state to let him run, sidestep a ban by OpenAI, which says he’s violated its terms of service, and win election.
Upcoming Events
Apply for our fall Executive Programs
A preliminary lineup of experts has been announced for our October and November Executive Programs. Seats are filling quickly, so explore the lineups and start your application today.
Ready to apply? Choose one of our remaining 2024 dates:
Thank you for stepping into the future with us! Join our global community of over 200,000 futuremakers: sign up to unlock early access to the Singularity Monthly newsletter, discover the most impactful technology breakthroughs, and get a glimpse of tomorrow, today.
The Singularity Team