
June 30 EST: Apple’s top brass may finally be conceding what the rest of Silicon Valley has been whispering for years: Siri isn’t working.
According to a report by Bloomberg, Apple is now exploring deals with OpenAI and Anthropic to embed their large language models into a new version of Siri — a move that would mark a rare shift in strategy for a company known for keeping its software stack strictly in-house.
Talks are still early. Apple has reportedly asked both companies to deliver cloud-based versions of their models that could be tested inside Apple’s infrastructure. But even that step is telling: Cupertino is now open to the possibility that a foundational part of its user experience — one embedded in hundreds of millions of devices — may be better off powered by someone else.
Why It Matters Now
Apple’s AI ambitions have been public but vague. The company introduced “Apple Intelligence” last year, combining on-device and server-side models to enable smarter features across iOS, macOS, and Siri. But according to insiders, its own models haven’t yet matched the performance of leaders like GPT-4 or Claude 3.
Privately, Apple knows the optics aren’t great. Siri — once the face of voice technology — now regularly ranks near the bottom in comparative tests. One recent evaluation cited by Investor’s Business Daily gave it a failing grade, while ChatGPT earned an “A.”
The gap isn’t just technical. It’s existential. If iPhone users begin defaulting to other tools for answers, scheduling, or everyday assistance, Apple loses more than utility — it loses control over the interface. That’s not a theoretical risk; it’s a commercial one.
Between Control and Competence
The tension for Apple is clear:
- It wants the best-performing model available.
- But it also wants to protect its brand of privacy, security, and ecosystem control.
Using OpenAI or Anthropic would require threading that needle carefully. Apple has built its hardware pitch — and its global regulatory posture — around the promise of user data that stays on device. Partnering with a third-party LLM means more server calls, more cloud involvement, and more potential exposure.
The company is likely to pursue a hybrid route: using its own models for on-device tasks, while calling on partners only for queries that demand deeper reasoning or conversational context. This is similar to how Apple is handling ChatGPT in iOS 18: opt-in, clearly labeled, and compartmentalized.
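The hybrid pattern described above can be sketched in a few lines. This is purely illustrative: the intent names, the keyword-based classifier, and the routing labels are all hypothetical stand-ins, not anything reflecting Apple’s actual design — the point is only that simple requests stay local while open-ended ones escalate to a cloud model.

```python
# Illustrative sketch of a hybrid on-device/cloud router.
# All names here are hypothetical; this mirrors the pattern, not any real API.

SIMPLE_INTENTS = {"set_timer", "play_music", "toggle_setting"}

def classify_intent(query: str) -> str:
    """Naive keyword classifier standing in for a small on-device model."""
    q = query.lower()
    if "timer" in q:
        return "set_timer"
    if "play" in q:
        return "play_music"
    return "open_ended"

def route(query: str) -> str:
    """Decide which backend would handle the query."""
    intent = classify_intent(query)
    if intent in SIMPLE_INTENTS:
        return "on_device"       # handled locally, no server call
    return "cloud_llm"           # deeper reasoning or conversational context
```

In practice the interesting engineering lives in the classifier: the cheaper and more private the local path, the more aggressively a system like this wants to keep queries on device.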
The Clock Is Ticking
The timeline matters. Apple is still targeting Spring 2026 to roll out the overhauled Siri. But in AI years, that’s an eternity. Competitors like Google and Microsoft are already deploying their assistant models across browsers, devices, and productivity suites. Even Amazon — whose Alexa has stalled — is now investing in large language model integrations.
Apple can’t afford to be the last one to show up to the AI table, especially not when its competitors are reshaping user behavior faster than Apple can respond.
Bottom Line
This isn’t just a product decision. It’s a strategic inflection point. If Apple moves forward with Anthropic or OpenAI, it’s signaling a shift in how it builds core technology — and a rare willingness to admit its internal solution isn’t enough.
That doesn’t happen often in Cupertino. Which is exactly why this story is one to watch.
New Jersey Times Is Your Source: The Latest In Politics, Entertainment, Business, Breaking News, And Other News.

A Wall Street veteran turned investigative journalist, Marcus brings over two decades of financial insight into boardrooms, IPOs, corporate chess games, and economic undercurrents. Known for asking uncomfortable questions in comfortable suits.
