Apple AI Shakeup: Amar Subramanya to Lead Foundation Models and Siri Push

Apple’s AI leadership has been reshuffled at a pivotal moment: long‑time AI chief John Giannandrea is stepping down to become an adviser and retire in spring 2026, while Amar Subramanya — a senior engineering leader with deep experience at Google and a brief stint at Microsoft — joins Apple as Vice President of AI, reporting to Craig Federighi and taking responsibility for Apple Foundation Models, machine‑learning research, and AI safety and evaluation.

Background / Overview​

Apple confirmed the leadership change in a corporate press release that frames the move as a planned transition intended to accelerate the company’s AI work and align model and product responsibilities more closely with software engineering. Under the new map, portions of the organization that Giannandrea previously oversaw will be redistributed to Sabih Khan (COO) and Eddy Cue (Services), while Subramanya will be placed under Craig Federighi (SVP, Software Engineering). This shake‑up arrives amid mounting external pressure over Apple’s pace in generative AI and assistant capabilities. Apple launched “Apple Intelligence” publicly in 2024 with strong emphasis on privacy and on‑device processing, but several marquee features — most notably a much more personalized, multimodal Siri — have been delayed, with internal targets now pointing to a spring 2026 rollout for the advanced Siri refresh.

Why this matters: leadership, optics, and product urgency​

Apple’s decision to hire a high‑profile engineering leader with foundation‑model experience and to realign reporting lines is meaningful on three levels.
  • Product velocity: Placing model and research leadership directly under Software Engineering reduces handoffs between research and OS‑level product teams, signaling a push to shorten development cycles.
  • Safety and governance: Apple explicitly tasked Subramanya with AI Safety & Evaluation, placing evaluation and monitoring at the same organizational level as model development — a structural signal that Apple intends to formalize testing and red‑teaming practices.
  • Competitive positioning: The move comes as rivals continue to productize large models aggressively and as hardware‑oriented challengers (including the high‑profile acquisition of Jony Ive’s hardware startup by OpenAI) press an already fast‑moving market.
These are not cosmetic changes. They map directly to the engineering tradeoffs Apple must navigate: balancing on‑device privacy and efficiency with the scale and capability of cloud‑hosted foundation models.

John Giannandrea: legacy and limits​

What he built​

John Giannandrea joined Apple in 2018 after a long tenure leading search and AI teams at Google. During his time at Apple he consolidated disparate machine‑learning efforts into a coherent organization responsible for foundation models, Search and Knowledge, Machine‑Learning Research, and AI Infrastructure — structures that underpinned Apple Intelligence. Apple credits him with building a “world‑class team” and elevating AI into Apple’s executive agenda.

The visible shortcomings​

At the same time, Giannandrea’s tenure coincided with a public reckoning over product execution speed. Apple’s privacy‑first, on‑device posture complicated rapid iteration with large, cloud‑centric models, and the Apple Intelligence roadmap experienced high‑profile delays. The most visible symptom was the postponed Siri overhaul, now targeted for spring 2026, a timeline that repeated press reports say has left executives and investors impatient.

The right time to step aside?​

Apple framed Giannandrea’s transition as orderly — adviser now, retire in spring 2026 — which helps preserve continuity. But the change also reads as an acknowledgment that the company requires a different operational posture to compress delivery timelines without abandoning Apple’s privacy commitments. Industry coverage largely treats the move as both a reward for past work and a response to execution pressure.

Amar Subramanya: who he is and what he brings​

Provenance and profile​

Amar Subramanya spent roughly 16 years at Google, where he rose through research and engineering ranks and is widely reported to have led engineering for Google’s Gemini assistant, working closely with teams tied to DeepMind research. In mid‑2025 he joined Microsoft as Corporate Vice President of AI, a brief but high‑profile move before Apple recruited him. Apple’s statement highlighted his background as a deep‑technical leader capable of bridging ML research and product engineering — the exact skill set Apple says it needs.

Strengths that match Apple’s needs​

  • Deep research pedigree: Subramanya’s academic work focuses on semi‑supervised learning and graph‑based models, methods that can reduce labeled‑data dependencies — a useful fit for Apple’s privacy constraints.
  • Productized scale: Experience leading assistant engineering at Google gives him direct, relevant expertise for scaling multimodal conversational systems to billions of users.
  • Cross‑company perspective: The recent Microsoft stint exposes him to a different execution culture and to efforts aimed at commercializing foundation models — experience Apple will value as it seeks to speed product rollouts while managing cloud dependencies.

Caveats about public record​

Some biographical and operational details — specific team sizes, internal milestones, and compensation — are reported inconsistently across outlets and often rely on anonymous sources. These numeric claims should be treated as indicative rather than definitive unless confirmed by primary disclosures. The academic and career highlights reported by Apple and multiple press outlets are, however, consistent.

Organizational map: who now reports where​

Apple’s public announcement makes a deliberate split:
  • Amar Subramanya → reports to Craig Federighi; leads Apple Foundation Models, ML Research, and AI Safety & Evaluation.
  • AI Infrastructure and Search & Knowledge functions that were part of Giannandrea’s remit are being moved to Sabih Khan (COO) and Eddy Cue (SVP, Services) to align with operational and product delivery responsibilities.
This separation is classic organizational design: keep pure model and research work inside engineering, and tie infrastructure and delivery to the executives who manage operations and services. The goal is to reduce cross‑functional friction and speed up shipping, but it also creates short‑term coordination costs as teams rewire reporting lines and prioritize new KPIs.

Product and technical implications​

1) Foundation models: build, license, or both?​

Apple’s public language centers on Apple Foundation Models, implying a strategy to either build in‑house base models or to heavily customize third‑party ones under strict safety and privacy constraints. Building proprietary foundation models at scale requires significant compute, data, and engineering investment; licensing or partnering (as Apple has done with OpenAI for ChatGPT features) provides a shortcut but raises questions about control, privacy, and latency.

2) On‑device vs cloud: the hybrid tradeoff​

Apple’s core competitive promise is privacy and device integration. But modern LLM capabilities often rely on cloud compute. The likely architectural pattern is hybrid:
  • Keep latency‑sensitive and privacy‑critical inference on device with optimized, compressed models.
  • Send resource‑heavy, contextual queries to a private cloud compute (PCC) under Apple control, with non‑training guarantees and strict telemetry controls.
This approach preserves Apple’s privacy posture but requires heavy investment in model compression, runtime optimization for the Apple Neural Engine, and private cloud capacity.
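
To make the hybrid tradeoff concrete, here is a minimal, illustrative routing sketch in Python. It is not Apple’s implementation: the token budget, the privacy flag, and the route names are assumptions standing in for whatever policy Apple’s private cloud compute actually enforces.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_personal_context: bool  # e.g., messages, health data (assumed flag)
    estimated_tokens: int

# Hypothetical capacity limit for a compressed on-device model.
ON_DEVICE_TOKEN_BUDGET = 4_096

def route(request: Request) -> str:
    """Decide where to run inference under a privacy-first hybrid policy."""
    # Simplification: privacy-critical context stays on device regardless of cost.
    if request.contains_personal_context:
        return "on_device"
    # Small, latency-sensitive queries also stay local.
    if request.estimated_tokens <= ON_DEVICE_TOKEN_BUDGET:
        return "on_device"
    # Everything else goes to operator-controlled private cloud compute,
    # with non-training guarantees assumed to be enforced server-side.
    return "private_cloud"

if __name__ == "__main__":
    print(route(Request("Summarize my recent messages", True, 20_000)))   # on_device
    print(route(Request("Draft a long essay on tides", False, 9_000)))    # private_cloud
```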

3) Safety and evaluation as first‑class engineering​

Placing AI Safety & Evaluation in Subramanya’s remit is a meaningful signal. Apple needs continuous, auditable evaluation pipelines that measure hallucinations, bias, privacy leakage, and adversarial behaviors. Implementing these pipelines without throttling product velocity is a major engineering challenge. Success here could become a market differentiator, especially as regulation tightens globally.
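
What does that look like in engineering terms? Below is a rough Python sketch of a release gate built on continuous evaluation. The metric names, thresholds, and suite format are hypothetical placeholders, not anything Apple has disclosed; a production pipeline would plug in curated red‑team suites and human review.

```python
from typing import Callable

# Hypothetical release thresholds; real values would come from policy review.
THRESHOLDS = {
    "hallucination_rate": 0.02,    # max fraction of factually wrong answers
    "privacy_leak_rate": 0.0,      # any leak of personal context fails the gate
    "jailbreak_success_rate": 0.01,
}

def run_eval(model: Callable[[str], str], suite: list[dict]) -> dict[str, float]:
    """Score a model against a labeled eval suite, one failure rate per metric."""
    totals = {name: 0 for name in THRESHOLDS}
    failures = {name: 0 for name in THRESHOLDS}
    for case in suite:
        metric = case["metric"]
        totals[metric] += 1
        if not case["checker"](model(case["prompt"])):
            failures[metric] += 1
    return {m: failures[m] / max(totals[m], 1) for m in THRESHOLDS}

def release_gate(scores: dict[str, float]) -> bool:
    """Block the release if any metric exceeds its threshold."""
    return all(scores[m] <= limit for m, limit in THRESHOLDS.items())

if __name__ == "__main__":
    stub_model = lambda prompt: "I don't know."  # trivially cautious stand-in
    suite = [{"metric": "hallucination_rate", "prompt": "Capital of France?",
              "checker": lambda ans: "Paris" in ans or "don't know" in ans}]
    scores = run_eval(stub_model, suite)
    print(scores, "release OK:", release_gate(scores))
```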

4) Siri: the hardest mile​

Integrating a more capable Siri is not merely a modeling problem. Siri touches system UI, developer APIs, third‑party apps, device resource management, and the daily experience of billions of users. For most of those users, error handling, recoverability, and interaction polish matter more than raw model capability. These human‑facing integration problems are where many AI projects fail — and where Subramanya’s prior assistant engineering experience will be tested.

Competitive landscape and strategic pressure​

Rivals and the talent wars​

Google, Microsoft, and OpenAI remain aggressive in productizing foundation models. Google advances Gemini across search and assistant, Microsoft couples Copilot with Office and Windows, while OpenAI has rapidly expanded into hardware design by acquiring Jony Ive’s startup io in a high‑profile deal. These moves compress Apple’s leeway both on features and on the talent market.

The Jony Ive factor​

OpenAI’s acquisition of Jony Ive’s io (reported at about $6.4–$6.5 billion) signals an ambition to build AI‑native hardware that could challenge the iPhone’s longstanding dominance. Apple’s defense remains its vertical integration of silicon, OS, and services — but Apple must now accelerate not just software, but hardware‑software co‑design thinking in ways that anticipate competitors combining industrial design and generative AI.

Partnerships and stopgaps​

Apple has also struck a strategic partnership to integrate ChatGPT into certain iOS experiences (including optional use within Siri), a move that buys feature parity while Apple scales its internal models. The integration includes privacy protections and user consent flows, and it is powered by GPT‑4o in Apple’s stated plan. This partnership shows Apple’s willingness to be pragmatic about third‑party models while building its internal capabilities.

Financial and investor context​

Investor attention to Apple’s AI posture is real. Apple’s 2025 stock performance shows a rally driven in part by strong device cycles, with some outlets noting double‑digit gains over certain intervals. These market moves reveal that investors weigh near‑term hardware strength against longer‑term concerns about AI spending on cloud infrastructure and frontier models. Apple’s on‑device strategy is less capital‑intensive than rivals’ cloud‑heavy approaches, but it risks slower feature parity in fast‑moving AI categories. Note: reported stock percentage moves depend heavily on the chosen timeframe; treat single‑figure headlines (for example, “up 16% in 2025”) as shorthand and check the precise date ranges used in each analysis.

Strengths, risks, and a realistic scorecard​

Strengths​

  • Vertical integration: Apple owns silicon, OS, and UX, a unique advantage for bringing efficient, on‑device models to market.
  • Privacy brand: Apple’s privacy posture is a market differentiator that resonates with regulators and segments of customers.
  • New technical leadership: Subramanya’s combination of deep research and large‑scale engineering experience is well aligned to the tasks Apple prioritized publicly.

Risks and open questions​

  • Execution under deadline pressure: The spring 2026 Siri target is a hard deadline; compressed timelines risk quality compromises or another public delay.
  • Compute and data investments: Building or even substantially customizing foundation models requires significant cloud and data investments. Apple has traditionally spent less on cloud infrastructure than rivals — a deliberate choice that may slow its pace unless that spending increases.
  • Talent and cultural fit: Bringing leaders from Google and Microsoft helps attract talent but also requires integrating different organizational cultures and operating rhythms. Attrition from repeated reorganizations is a known hazard.
  • Third‑party dependency risk: Reliance on external models for certain features (e.g., ChatGPT) helps in the short term but creates dependency and privacy tradeoffs that must be carefully managed.

What success will look like — concrete indicators to watch​

  • Demonstrable Siri improvements: measurable gains in conversational ability, cross‑app actions, and lower failure rates in real‑world testing by mid‑2026.
  • Evidence of Apple Foundation Models: research publications, product previews, or dev tools showing Apple’s models and optimization techniques.
  • Clear safety and evaluation reporting: Apple publishing more detailed safety metrics, red‑team outcomes, or third‑party audits.
  • Strategic hires and retention: senior ML and infrastructure hires joining Subramanya’s teams and low attrition in critical engineering pods.
  • CapEx and cloud activity: observable increases in Apple’s cloud and private compute investments or new data‑center activity tied to AI workloads. (If Apple does not materially increase cloud spending, its ability to compete on model capability at scale will be constrained.)

Final analysis: realistic optimism — but the hard work starts now​

Apple’s leadership reshuffle is the kind of strategic reset the company needed to align model engineering, safety, and product delivery under a single technical leader while delegating operational plumbing to executives focused on shipping. Bringing Amar Subramanya onboard is a signal — and it’s a strong one — that Apple intends to close the gap on foundation models while retaining its emphasis on privacy and on‑device performance. That said, a single hire and a new org chart do not guarantee product outcomes. Apple faces three compound engineering challenges: (1) delivering foundation‑level capability without compromising privacy, (2) optimizing models and runtimes for Apple Silicon at scale, and (3) integrating those capabilities into user experiences that are robust, predictable, and delightful. These are hard problems with costly infrastructure and human capital demands. Success requires disciplined tradeoffs, transparent evaluation, and the patience to iterate without overpromising.
Apple has structural advantages that few companies enjoy: integrated hardware, world‑class UX, and a vast installed base. Turning those into generative‑AI wins will depend on whether Apple can accelerate execution while keeping its privacy promises intact — and whether Subramanya and Craig Federighi can translate research excellence into shipped features on a credible timetable. The coming 6–12 months will tell whether this is a leadership reset that closes the gap, or an organizational reshuffle that delays tough technical decisions until later.

Concluding assessment: the appointment of Amar Subramanya is a decisive move to refocus Apple’s AI effort around foundation models, safety, and engineering velocity. It addresses many of the criticisms that have accumulated around Apple Intelligence’s early rollout. But the company now faces the harder task of delivering measurable product improvements — notably for Siri — without diluting the privacy and integration advantages that define Apple’s brand. The industry will be watching for concrete technical outputs, transparent safety practices, and signs that Apple is materially investing in the cloud and compute needed to match the scale and responsiveness of its rivals.
Source: Tekedia Giannandrea Steps Down, Subramanya Steps In: Apple Shakes Up AI Leadership Amid Criticism and Project Delays
 

Microsoft Finally Lets Users Hide AI Actions in Windows 11 — But There’s a Catch
Microsoft has quietly changed the behavior of the Windows 11 right‑click menu so that the controversial AI Actions parent no longer forces itself into every context menu — but the fix comes with caveats, staged rollouts, and deeper platform changes that both explain the design and raise new questions about control, privacy, and developer responsibility.

Background​

For months Windows 11 users and testers have complained that Microsoft was “stuffing AI” into too many surfaces of the OS, with the File Explorer right‑click menu becoming a frequent point of friction. A prominent annoyance was the AI Actions submenu: a parent entry that often merely launched first‑party apps (Photos, Paint, Teams, Copilot/365 features) rather than performing in‑place tasks, and — critically — remained visible as an empty placeholder after users turned off the underlying app actions. That empty header consumed screen space and made an already crowded context menu feel more cluttered rather than more useful. This issue prompted repeated feedback from Insiders and wider community discussion.

On December 5 Microsoft published Windows 11 Insider Preview Build 26220.7344 (KB5070316) to the Dev and Beta channels. In the File Explorer fixes, the release notes include the short but consequential line: “If there are no available or enabled AI Actions, this section will no longer show in the context menu.” That single change is the basis for the new behavior: when no App Actions are registered or enabled, the AI Actions parent should be suppressed entirely.

What changed in Build 26220.7344 (KB5070316)​

The practical, visible change​

  • If you disable every App Action exposed by installed apps (via Settings), File Explorer now evaluates the list of available AI Actions at menu build time and hides the parent entry when there’s nothing to show.
  • That means users who prefer a minimal right‑click menu can remove the AI Actions surface without hacks or registry tweaks — provided the build and feature rollout have reached their device.
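
The underlying logic is easy to picture. The Python sketch below is purely illustrative, since the actual shell is native Windows code and none of these names are real Windows APIs, but it captures the behavioral change the release notes describe: the parent entry is now gated on whether any enabled actions survive filtering.

```python
from dataclasses import dataclass

@dataclass
class AppAction:
    name: str      # e.g. "Blur Background" (hypothetical Photos action)
    enabled: bool  # mirrors the Settings > Apps > Actions toggle

def build_context_menu(base_items: list[str], registered: list[AppAction]) -> list[str]:
    """Assemble the right-click menu for a file."""
    menu = list(base_items)
    enabled = [a.name for a in registered if a.enabled]
    # Old behavior: the "AI Actions" header was appended unconditionally,
    # leaving an empty placeholder when every action was toggled off.
    # New behavior (Build 26220.7344): skip the header when nothing remains.
    if enabled:
        menu.append("AI Actions > " + ", ".join(enabled))
    return menu

if __name__ == "__main__":
    actions = [AppAction("Blur Background", False), AppAction("Erase Objects", False)]
    print(build_context_menu(["Open", "Copy", "Delete"], actions))
    # -> ['Open', 'Copy', 'Delete']  (no empty AI Actions header)
```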

How to hide AI Actions right now (step‑by‑step)​

  1. Open Settings (Win + I).
  2. Go to Apps.
  3. Click Actions.
  4. Toggle off every app listed that exposes App Actions (Paint, Photos, Teams, Microsoft 365 Copilot entries, and any others).
  5. Right‑click a supported file in File Explorer — the AI Actions parent should no longer appear if no enabled actions remain.
This sequence reflects Microsoft’s intended user control path: the UI toggle truly removes the functional hooks and, with the new conditional logic, removes the UI footprint.

The catch: rollout, ghost labels, and what still trips users up​

Even with the official fix in the build notes, two practical issues matter to anyone expecting an immediate, universal change.
  • Staged / server‑gated rollout: Microsoft explicitly notes many features are rolled out gradually (Controlled Feature Rollout). That means two machines on the same build may behave differently while the feature gate ramps. If you update and still see the empty AI Actions header, it may simply be waiting for the staged flag to reach your device.
  • Residual or “ghost” labels in older previews: Community reports and screenshots from earlier builds showed the AI Actions parent lingering as an empty header even after app toggles were switched off. Early coverage suggests that while the code path to hide the header is present in 26220.7344, some Insider devices still showed a ghost category until server flags changed or subsequent cumulative updates fixed remaining edge cases. Users should expect a short window of inconsistency as the staged rollout proceeds.
In short: the fix is real — but delivery can be uneven for reasons unrelated to your machine, and you may still encounter the old behavior briefly.

Why this small UX change matters​

Restoring a basic principle of user control​

A toggle that visibly leaves UI chrome in place after being turned off is a classic usability failure. Making the AI Actions parent disappear when no actions remain restores that elementary expectation: if you opt out, the system should stop advertising the feature. That’s a win for predictability and surface cleanliness.

It’s symptomatic of a larger problem: UI bloat and discoverability​

The AI Actions debate isn’t only semantic. It highlights how operating systems and apps accumulate small surfaces — shell verbs, app actions, OneDrive hooks, compression tools, sharing options, and now AI surfaces — which together make the context menu tall, repetitive, and harder to scan. Consolidating or hiding empty parent entries is a helpful band‑aid, but the broader problem requires consistent platform decisions about grouping, prioritization, and developer registration rules.

The plumbing behind the scenes: Model Context Protocol (MCP) and agent connectors​

Build 26220.7344 did more than change a conditional in a menu: Microsoft also announced native support for the Model Context Protocol (MCP) and introduced built‑in agent connectors (File Explorer and Windows Settings) as part of a public preview for agentic capabilities on Windows. MCP is an open protocol that standardizes how AI agents discover and interact with tools and data, and Microsoft’s implementation centers around an on‑device registry (ODR) to surface agent connectors in a managed, auditable way.
  • The File Explorer connector exposes scoped file access to agents (with user consent) and can enable natural‑language search on Copilot+ hardware.
  • The Settings connector lets agents query or navigate settings pages on behalf of users, again with OS‑level controls.
  • The ODR gives Windows the ability to contain connectors in secure identities, apply policies, and keep audit trails of agent activity.
This is important because the proliferation of AI surfaces (AI Actions, Copilot integration, context‑aware suggestions) is not purely cosmetic. Microsoft appears to be building the platform to let agents and apps interact in a controlled manner, rather than hard‑coding every AI micro‑feature directly into the shell.
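
Because MCP is an open protocol with public SDKs, the general shape of a connector is easy to demonstrate. The sketch below uses the FastMCP helper from the official Model Context Protocol Python SDK to expose one tool, roughly the role the File Explorer connector plays. The tool name, the consent scope, and the result cap are assumptions for illustration; Microsoft’s actual connector and its ODR registration are not public in this form.

```python
# pip install "mcp[cli]"  (the official Model Context Protocol Python SDK)
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-explorer-demo")

# Hypothetical consent scope: the agent only ever sees this directory.
ALLOWED_ROOT = Path.home() / "Documents"

@mcp.tool()
def search_files(query: str) -> list[str]:
    """Return paths under the consented root whose names match the query."""
    return [
        str(p) for p in ALLOWED_ROOT.rglob("*")
        if p.is_file() and query.lower() in p.name.lower()
    ][:20]  # cap results so the agent gets a bounded payload

if __name__ == "__main__":
    # stdio transport: an MCP host (the agent runtime) launches and talks to
    # this process; on Windows, that discovery role would fall to the
    # on-device registry rather than manual configuration.
    mcp.run(transport="stdio")
```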

Strengths: why Microsoft’s approach has merit​

  • Platform‑level consistency: Implementing MCP and an on‑device registry provides a single place to manage agent‑to‑app relationships and user consent, which can reduce ad‑hoc integrations and give administrators auditability. That’s a net gain for enterprise governance and security posture.
  • User control via Settings: Giving users an explicit Settings page to enable/disable App Actions and tying the context menu to that state is cleaner than hidden feature flags or registry hacks. The explicit path supports discoverability for power users and admins.
  • Developer tooling and a standard protocol: If MCP gains broad adoption, independent developers and third‑party tools can expose capabilities in a predictable, interoperable way — avoiding bespoke integrations that caused the earlier menu clutter.

Risks and open questions​

1) Privacy and consent in agentic workflows​

A protocol that exposes tools and data to agents necessarily increases the attack surface for automation. Microsoft’s design — identities, audit trails, and consent gating — is encouraging, but deploying system‑level agent connectors raises two questions:
  • How granular and transparent will consent prompts be in real workflows?
  • Will telemetry, default settings, or OEM customizations nudge users into granting broad access?
These questions matter for both consumer trust and enterprise compliance. The on‑device registry and Agent ID concepts are positive technical controls, but real security depends on defaults, discoverability of consent dialogs, and the clarity of audit logs.

2) Fragmentation between WinUI apps and the Shell​

Microsoft’s new Split Context Menu (a WinUI control with a SplitMenuFlyoutItem pattern demonstrated in a WinUI Community Call) promises to reduce vertical bloat by grouping related actions into split rows with a primary action and a chevron for secondary choices. That design can cut menu height substantially in demos, but it’s currently focused on WinUI‑based apps and the Windows App SDK; it’s not yet fully promised for the global File Explorer shell. That creates a risk of uneven user experience: some apps will get the cleaner split menu while legacy shell surfaces and third‑party shell extensions may remain unchanged unless Microsoft extends the pattern to the shell.
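
The space saving is simple arithmetic. SplitMenuFlyoutItem itself is a WinUI (XAML/C#) control, so the Python snippet below only models the grouping idea with hypothetical menu items: related verbs collapse into one row with a primary action plus a chevron of secondary choices.

```python
from dataclasses import dataclass, field

@dataclass
class SplitRow:
    primary: str                                        # shown inline, one click
    secondary: list[str] = field(default_factory=list)  # behind the chevron

flat = ["Open", "Open with...", "Open in new tab",
        "Copy", "Copy as path", "Copy to folder"]
split = [
    SplitRow("Open", ["Open with...", "Open in new tab"]),
    SplitRow("Copy", ["Copy as path", "Copy to folder"]),
]
print(f"{len(flat)} rows -> {len(split)} split rows")  # 6 rows -> 2 split rows
```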

3) Developer adoption and migration burden​

The split menu pattern and MCP connectors both require developer updates. Many apps still use legacy shell verbs, Win32 integrations, or COM‑based shell extensions; migrating those behaviors to WinUI controls or MCP connectors will take tooling, documentation, and time. Without a clear migration path, Windows could end up with a mix of legacy menus, new split menus, and agent‑driven features — a confusing middle state for users and maintainers.

4) UX semantics vs. actual function​

A common criticism of AI Actions was that the parent label implied AI doing work for you, while in practice most items simply opened an existing app to complete the task. If Microsoft intends these entries to become true in‑place AI actions, it must ensure consistency between label, expectation, and function; otherwise, users will continue to feel misled. The MCP architecture suggests a path toward richer agentic behaviors, but until agents can truly operate in‑place (with clear consent and transparent outputs), the UI should avoid overstating capabilities. Community reaction shows a fine line between helpful shortcuts and misleading marketing.

Practical guidance for administrators and power users​

  • If you want to reduce context‑menu clutter today: follow the Settings > Apps > Actions path and turn off every app action you don’t want. On devices where Build 26220.7344 is installed and the rollout gate has been applied, the AI Actions parent should disappear.
  • If you’re in an enterprise: test the MCP preview carefully in a lab. The promise of agent connectors and the Agent Workspace is powerful for automation, but enterprises should validate consent flows, auditability, and policy control before enabling agentic features broadly.
  • Track the split context menu lifecycle: the WinUI control is an appealing developer pattern, but it is primarily a WinUI/Windows App SDK feature for now. Expect a phased adoption curve and plan for migration of legacy shell integrations.

A wider perspective: why Microsoft is doing this now​

Microsoft’s changes around AI Actions, MCP, and split menus are not isolated patches but part of a bigger strategy to enable agentic workflows while attempting to maintain control, security, and manageability at the OS level. The company is moving from point solutions (add an AI button here or an "AI Actions" submenu there) to platform primitives:
  • A discovery and registration mechanism for agents (MCP + ODR)
  • A managed runtime environment for agents (Agent Workspace, Agent ID)
  • UI patterns that can scale developers’ choices without overwhelming users (SplitMenuFlyoutItem)
That pipeline — platform, connectors, UI — is sensible. The friction arises in the in‑the‑wild transition: developer readiness, legacy compatibility, staged rollouts, and user expectations about what “AI” means in the UI. The context menu fix in 26220.7344 is a small but important corrective step, and the MCP preview explains why Microsoft built the feature the way it did in the first place: to create discoverable hooks for agents rather than hard‑wiring every single AI micro‑feature into the shell.

What to watch next​

  • Whether the Split Context Menu moves from WinUI‑only demos into the global shell (File Explorer) and how Microsoft will migrate legacy shell verbs into the new model. Early demos suggested menu height reductions of up to ~30–38% in specific cases, but these are prototype figures and will vary in practice.
  • How Microsoft refines the MCP consent, audit, and policy surfaces during the public preview period. The effectiveness of on‑device registries and Agent IDs depends on clear, discoverable controls and robust auditing for both consumers and enterprise admins.
  • The behavior of the AI Actions suppression across the rollout: whether server gating causes broad variability and how quickly Microsoft will close remaining edge cases where a ghost label persists despite toggles being off. Community feedback and subsequent Insider flights should make the rollout visible quickly.
  • Independent verification of other UI refresh claims reported in some outlets — for example, the claim that the Run dialog will receive a major visual refresh “for the first time in over 30 years” is repeated in a handful of articles but lacks corroboration in Microsoft’s release notes or primary developer feeds; treat that claim as unverified until Microsoft or an official SDK/WinUI update confirms the change.

Bottom line​

The change in Windows 11 Build 26220.7344 (KB5070316) that hides AI Actions when none are available is a concrete and welcome fix for users who wanted a cleaner right‑click menu. It’s a small usability correction with outsized symbolic value: toggles should lead to visible effects. But that tidy UX improvement is only one layer of a broader platform shift — Microsoft is rolling out MCP and agent connectors, and prototyping a split context menu that could meaningfully reduce menu clutter over time.
Those platform moves are promising, but they come with important trade‑offs. The success of this phase will depend on clear consent, robust auditing, predictable defaults, and thoughtful migration paths for legacy apps. For now, users can regain some control over their context menus; administrators and developers should treat MCP and the split‑menu pattern as preview‑level primitives to test and evaluate rather than turnkey solutions. The company has started to listen and iterate — and the next challenge is ensuring the architecture and the UX evolve in step, not at cross‑purposes.
Source: Windows Report Microsoft Finally Lets Users Hide AI Actions in Windows 11 — But There's A Catch
 
