Apple is cooking up a very different Siri from the one you know, an assistant that will understand context, remember conversations and be able to coordinate tasks between apps without losing the thread. Behind the scenes, the company has set up a testing environment with a ChatGPT-style app that won’t see public release, but which is being used to validate the new architecture, polish features and measure whether a traditional conversational format provides value. Can you imagine picking up a travel plan days later and Siri remembering every detail without having to repeat it all?
A secret testing ground for the new Siri
Apple’s lab is not a consumer chatbot but an internal tool that teams use to experiment with a Siri that is more context-aware and can chain actions across applications. The idea is that you could, for example, pick up a shopping list in Notes, book a delivery window in your shopping app and set reminders in the calendar, all in the same conversation and without breaking the flow. Moreover, that conversation can persist over time, keep its history and support follow-up questions that build on one another, even moving from iPhone to Mac without losing continuity.
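To make the idea tangible, here is a purely illustrative Swift sketch of a single conversation thread driving actions in several apps while sharing one pool of context. Every type and name below is invented for the example; none of it corresponds to Apple’s actual APIs or to how Siri is built.

```swift
import Foundation

// Illustrative only: one persistent conversation thread whose shared
// context feeds hypothetical "actions" in different apps.

struct ConversationTurn {
    let role: String      // "user" or "assistant"
    let text: String
}

final class ConversationThread {
    private(set) var turns: [ConversationTurn] = []
    var context: [String: String] = [:]   // e.g. "delivery.window" -> "Saturday 10-12"

    func add(_ role: String, _ text: String) {
        turns.append(ConversationTurn(role: role, text: text))
    }
}

// Hypothetical per-app actions the assistant could chain together.
protocol AssistantAction {
    var appName: String { get }
    func perform(using context: [String: String]) -> String
}

struct AddToShoppingList: AssistantAction {
    let appName = "Notes"
    let item: String
    func perform(using context: [String: String]) -> String {
        "Added \"\(item)\" to the shopping list in \(appName)."
    }
}

struct CreateReminder: AssistantAction {
    let appName = "Calendar"
    let title: String
    func perform(using context: [String: String]) -> String {
        "Created reminder \"\(title)\" in \(appName)."
    }
}

// One request fans out into several app actions, all logged in the same
// thread so a follow-up question days later still has the context.
let thread = ConversationThread()
thread.context["delivery.window"] = "Saturday 10-12"
thread.add("user", "Add milk to my list and remind me when the delivery arrives.")

let actions: [AssistantAction] = [
    AddToShoppingList(item: "milk"),
    CreateReminder(title: "Grocery delivery \(thread.context["delivery.window"] ?? "")"),
]
for action in actions {
    thread.add("assistant", action.perform(using: thread.context))
}
for turn in thread.turns {
    print("\(turn.role): \(turn.text)")
}
```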
This approach aims not only at more natural chats but at intelligent workflows that reduce back-and-forth between apps. The internal app can manage multiple threads, store and reference previous chats, and sustain prolonged exchanges for complex tasks, such as planning a trip and modifying it days later. In parallel, Apple is testing different language models, from OpenAI and Anthropic options to variants of Google Gemini, while also validating its own models so that privacy is preserved when the assistant needs to look up user data on-device.
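Running several models through the same harness suggests a provider-agnostic interface behind the scenes. The sketch below is an assumption about what such an abstraction could look like; the backend names echo the providers mentioned in the reporting, but the code reflects nothing of Apple’s actual implementation or any real SDK.

```swift
import Foundation

// Hypothetical: one interface, several swappable model backends, so the
// same conversation history can be replayed against each candidate model.

protocol LanguageModelProvider {
    var name: String { get }
    func reply(to prompt: String, history: [String]) -> String
}

struct OpenAIBackend: LanguageModelProvider {
    let name = "OpenAI"
    func reply(to prompt: String, history: [String]) -> String {
        "[\(name)] answer using \(history.count) earlier turns: \(prompt)"
    }
}

struct GeminiBackend: LanguageModelProvider {
    let name = "Gemini"
    func reply(to prompt: String, history: [String]) -> String {
        "[\(name)] answer using \(history.count) earlier turns: \(prompt)"
    }
}

struct InHouseBackend: LanguageModelProvider {
    let name = "InHouse"
    func reply(to prompt: String, history: [String]) -> String {
        "[\(name)] answer using \(history.count) earlier turns: \(prompt)"
    }
}

// The same long-running thread is evaluated against each backend, which is
// roughly what "testing different language models" on one harness implies.
let history = ["Plan a three-day trip to Lisbon.", "Booked: flights and hotel."]
let backends: [LanguageModelProvider] = [OpenAIBackend(), GeminiBackend(), InHouseBackend()]

for backend in backends {
    print(backend.reply(to: "Move the whole trip one week later.", history: history))
}
```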
Architecture 2.0 and an answers engine
The change goes beyond simply putting an LLM in front of the old assistant: Apple has decided to rebuild Siri from the ground up because the original architecture fell short at handling context, long conversations and deep integration with apps. This second generation relies on large language models to handle dialogue, maintain long-term memory and orchestrate actions. In fact, the company is internally building a system called World Knowledge Answers, a true answers engine capable of combining text, photos, videos and points of interest, and of summarizing search results into a clear, actionable synthesis.
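No technical details of World Knowledge Answers have been published, so the following is only a guess at the general shape of an answer pipeline: query several sources, then synthesize one response. Every type here is invented for illustration, and in a real system the final summary would be written by a language model rather than concatenated strings.

```swift
import Foundation

// Speculative sketch of an answer-engine pipeline: fan a query out to
// multiple result sources, then merge everything into a single reply.

enum ResultKind { case text, photo, video, pointOfInterest }

struct SearchResult {
    let kind: ResultKind
    let title: String
    let snippet: String
}

protocol SearchSource {
    func search(_ query: String) -> [SearchResult]
}

struct WebTextSource: SearchSource {
    func search(_ query: String) -> [SearchResult] {
        [SearchResult(kind: .text, title: "Guide", snippet: "Overview of \(query).")]
    }
}

struct PlacesSource: SearchSource {
    func search(_ query: String) -> [SearchResult] {
        [SearchResult(kind: .pointOfInterest, title: "Nearby spot", snippet: "Relevant place for \(query).")]
    }
}

// The "synthesis" step: here it is plain string assembly to keep the
// sketch self-contained; an LLM would do this in a real answer engine.
func synthesizeAnswer(query: String, sources: [SearchSource]) -> String {
    let results = sources.flatMap { $0.search(query) }
    let bullets = results.map { "- [\($0.kind)] \($0.title): \($0.snippet)" }
    return "Answer for \"\(query)\":\n" + bullets.joined(separator: "\n")
}

print(synthesizeAnswer(query: "best time to visit Lisbon",
                       sources: [WebTextSource(), PlacesSource()]))
```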
This engine fits the ambition to turn Siri, Safari and Spotlight into entry points to a smarter search layer. Instead of being limited to timers or basic commands, the assistant will be able to process complex questions and return useful answers in one place, a move that puts it alongside offerings like Google’s AI Overviews or tools like Perplexity. The expected result? Continuous conversations, more human responses and chained tasks that feel native to the ecosystem, the kind of leap in quality you notice when a feature is truly integrated into the system rather than bolted on as an external add-on.
Partnerships, privacy and Apple’s timeline
Apple is betting on a pragmatic approach: combining different providers to move faster without ceding control of the experience. According to reports, the company has held talks with Anthropic, OpenAI and Google, and is even testing a customized Gemini model under an agreement to power some Siri capabilities. At the same time, it is evaluating its own models for planning functions and using its Foundation Models to search personal data locally, keeping sensitive information in-house. It is the balance between power and privacy that many users expect from the company.
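One way to picture that split is a simple routing rule: queries that touch personal data stay with an on-device model, while everything else can go to an external provider. The sketch below is illustrative only; the trigger phrases and both “models” are invented and do not correspond to Apple’s Foundation Models framework or any partner SDK.

```swift
import Foundation

// Illustrative routing rule: personal-data queries are answered locally,
// general knowledge questions go to a cloud backend.

enum QueryDestination { case onDevice, externalProvider }

func route(_ query: String) -> QueryDestination {
    // Invented heuristics for the sketch; a real system would classify
    // queries far more carefully than a keyword match.
    let personalSignals = ["my calendar", "my messages", "my photos", "my email"]
    let lowered = query.lowercased()
    return personalSignals.contains(where: { lowered.contains($0) })
        ? .onDevice
        : .externalProvider
}

func answer(_ query: String) -> String {
    switch route(query) {
    case .onDevice:
        // In this sketch, personal context never leaves the device.
        return "[local model] searching your data for: \(query)"
    case .externalProvider:
        return "[cloud model] answering: \(query)"
    }
}

print(answer("What's on my calendar on Friday?"))   // routed on-device
print(answer("How long is the flight to Lisbon?"))  // routed to the cloud
```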
As for timing, the roadmap points to the LLM-enabled version arriving in early 2026, possibly as part of an update such as iOS 26.4 planned for March. Before that, at the end of next year, Apple plans to show a new look for Siri with a more human feel, inspired by the Mac Finder logo; later, in 2026, another visual refresh would arrive, accompanied by an integrated health feature meant to serve as the foundation for a paid wellness service. At the same time, the company is said to have ruled out launching a standalone chatbot app in order to focus on embedding AI in its native search layers, although it continues to use that testbed to check whether a classic chat window fits into the experience.
The timeline may seem conservative compared with rivals that ship new features monthly, but that leeway gives Apple room to fine-tune the experience so everything feels organic and coherent with iOS and macOS. At ActualApp we see it as the step from ‘basic mode’ to ‘turbo mode’: a Siri with memory, context and hands inside the apps, evaluated against technologies like ChatGPT and Google Gemini, and designed to integrate into every corner of the system. If you live in the Apple ecosystem, this looks like the assistant’s biggest leap since its birth; the question is whether Apple will manage to keep its privacy hallmark and that flawless integration while matching competitors’ capabilities.