Apple promised us a smarter, more capable Siri at WWDC 2024. The pitch was compelling: a Siri that understands your personal context, digs through your messages and emails, performs actions inside your apps, and evolves into a true assistant.
Two years later, that dream remains just that: a dream. But one thing might change the course of Apple’s assistant. According to reports, Siri is no longer tied to a single AI brain; Apple is building it to be flexible, capable of routing requests to whichever external model does the job best.
That raises an obvious question: if Siri can use any AI, which one should it use? Right now, the default external model is ChatGPT, but I’d argue that Gemini is the more logical choice. Here’s why.
Siri is a search engine
Think about how you actually use Siri on a daily basis. You ask for the day’s weather. You ask for the closest eateries near you. You ask it to look things up on the web. A significant portion of Siri usage involves search or search-like queries, and no company on the planet does search better than Google.
Google has spent decades building the world’s most powerful search engine, and that expertise now flows directly into Gemini. When you ask Gemini something, it does not just pull from a language model; it draws on Google’s real-time web index, Google Maps, Google Shopping, and more.

Using that to power Siri’s search capability would take it to heights no other LLM provider can match.
Apple promised personal intelligence, but Gemini is delivering it
One of the biggest talking points from Apple’s WWDC 2024 announcement was personal intelligence. Apple showed Siri surfacing contextual information from across your apps, answering questions like “when is my mom’s flight landing?” or “show me photos of Stacy in her pink coat from New York.”

It was genuinely impressive in demo form. In practice, however, if I ask Siri to show me a photo of me wearing a black t-shirt, it returns random web photos of strangers wearing black t-shirts. I am not exaggerating when I say that Siri’s personal intelligence has been a colossal failure.

Meanwhile, Gemini quietly rolled out its own Personal Intelligence feature. It taps into your Gmail, Calendar, Google Photos, Drive, and more to reason across your personal data and answer complex, life-context questions. It’s not perfect, but at least it’s working.

That is almost word-for-word what Apple demoed as a future Siri capability, except Gemini is doing it today. If Apple wants to accelerate the delivery of those features to users, Gemini might be the shortcut it needs.
Gemini already does what Siri promised
Apple Intelligence deploys compact, capable AI models across system apps, combining on-device processing for privacy with cloud-based compute for more demanding tasks. The on-device processing and privacy aspects are what set Apple apart from the competition. But it’s no longer alone there.

Gemini Nano is already doing this on Pixel and Samsung Galaxy devices. It powers offline summarization, smart replies, and contextual features, all without needing an internet connection. On the Pixel 9 and newer, Gemini Nano is multimodal and can process text and images directly on the device.

Apple is building toward what Google has already shipped. Rather than reinventing that wheel, using Gemini’s existing Nano deployment as the foundation for on-device Siri features would save Apple a lot of headaches and money.
Gemini’s creative toolkit is packed
Here’s where it gets genuinely exciting. Gemini is not just a text model. It comes with an entire creative ecosystem that Apple could tap into.
Veo handles video generation at up to 1080p, with cinematic styles and clips longer than a minute. Lyria, from Google DeepMind, handles music and audio generation. For images, Nano Banana (Google’s image generation model) recently received a major upgrade, with improved text rendering, better subject consistency, and support for any aspect ratio.

Apple recently launched its own Creator Studio, giving users access to creative tools for a fixed monthly subscription. If the company is serious about competing with the likes of Adobe, it needs to offer generative capabilities. Gemini already has all of those capabilities, and it would make perfect sense to integrate them into Apple’s creative suite.

The partnership already exists
This point isn’t discussed enough. Google reportedly pays Apple around 20 billion dollars every year to remain the default search engine in Safari. That is one of the most valuable distribution deals in the history of tech. The relationship between Apple and Google is deep, long-standing, and financially enormous for both companies.
Extending that relationship from “Google powers Safari search” to “Gemini powers Siri’s AI features” is not a dramatic leap. It is a natural evolution of a partnership that already shapes half of what happens when you open a browser on your iPhone.
So which model would I stick with?
Claude is excellent for long-context reading and nuanced reasoning. ChatGPT has a massive ecosystem and strong coding and agent tooling. Both work great as user-chosen specialists; I use Claude myself on my computer.
But as the default engine under Siri’s hood? They are not the right pick. Gemini operates at the OS level on mobile, understands search and personal context, ships in an on-device Nano form factor, and sits at the center of the most important commercial relationship Apple has with any tech company.
The pieces are all there. It is not a question of whether Gemini could power a smarter Siri; it is a question of whether Google and Apple can hash out a mutually beneficial deal. And if the rumors are anything to go by, things may already be heading in that direction.
