Microsoft's Microsoft 365 Copilot is no longer a single‑vendor show: starting today the company is adding Anthropic's Claude family — notably Claude Sonnet 4 and Claude Opus 4.1 — as selectable backends inside Copilot, giving organizations the ability to route specific Copilot workloads to Anthropic models while keeping OpenAI and Microsoft's own models in the mix.
Background / Overview
Microsoft launched Microsoft 365 Copilot to bring large language model capabilities directly into Word, Excel, PowerPoint, Outlook, Teams and bespoke enterprise workflows. That early strategy leaned heavily on OpenAI models and a deep partnership that included substantial investment and Azure integration. Over time the technical, commercial and scale realities of running generative AI at billions‑of‑calls scale have driven Microsoft to pursue a multi‑model orchestration approach: select the best model for the task rather than the same model for every request. The announcement made on September 24, 2025 expands model choice inside two primary Copilot surfaces today:
- The Researcher reasoning agent can now be powered by either OpenAI's reasoning models or Anthropic's Claude Opus 4.1. Administrators must enable Anthropic models for their tenant before employees can pick them.
- Copilot Studio, the low‑code/no‑code agent authoring environment, now offers both Claude Sonnet 4 and Claude Opus 4.1 as selectable engine options for custom agents.
What Microsoft actually announced
The immediate, visible changes
- Model choice in Researcher: Users of the Researcher agent will be able to choose Anthropic's Claude Opus 4.1 as an alternative to OpenAI‑powered reasoning for deep, multi‑step research and report generation. This choice is surfaced where Researcher is available and is subject to administrator enablement.
- Copilot Studio model options: When building or customizing agents in Copilot Studio, developers and administrators can now pick Claude Sonnet 4 (optimized for high‑throughput, production tasks) or Claude Opus 4.1 (Anthropic's higher‑capability reasoning/coding model) as the agent's model.
- Rollout and availability: Microsoft says model choice is available starting immediately to licensed organizations participating in programs like Frontier (early access) and through gradual enterprise rollouts; administrators control availability for their tenants.
What’s unchanged
- Microsoft is not removing OpenAI from Copilot. Instead, Copilot becomes an orchestration layer that routes each request to the model best suited to the task, cost and compliance constraints (a simplified routing sketch follows below). OpenAI remains central for many high‑complexity or frontier tasks, while Microsoft's own models are also part of the backend mix.
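To make the orchestration idea concrete, the following is a minimal, hypothetical sketch of task‑based routing in Python. It is not Microsoft's actual router or any published Copilot API; the task categories, the `route` function and the `ModelChoice` structure are illustrative assumptions, with only the model family names drawn from the announcement.

```python
from dataclasses import dataclass

# Hypothetical task profiles an orchestration layer might distinguish.
TASK_PROFILES = {
    "slide_generation":      {"deep_reasoning": False, "sensitive": False},
    "spreadsheet_transform": {"deep_reasoning": False, "sensitive": False},
    "deep_research":         {"deep_reasoning": True,  "sensitive": False},
    "regulated_summary":     {"deep_reasoning": True,  "sensitive": True},
}

@dataclass
class ModelChoice:
    model: str   # backend chosen for the request
    reason: str  # recorded so audits can explain the routing decision

def route(task: str, anthropic_enabled: bool) -> ModelChoice:
    """Pick a backend for a task; the policy here is purely illustrative."""
    profile = TASK_PROFILES.get(task, {"deep_reasoning": True, "sensitive": True})
    if profile["sensitive"] or not anthropic_enabled:
        # Sensitive workloads, or tenants that have not enabled Anthropic,
        # stay on the default OpenAI/Microsoft stack.
        return ModelChoice("openai-reasoning", "tenant policy or data sensitivity")
    if profile["deep_reasoning"]:
        return ModelChoice("claude-opus-4.1", "deep multi-step reasoning task")
    return ModelChoice("claude-sonnet-4", "high-throughput production task")

if __name__ == "__main__":
    for task in ("slide_generation", "deep_research", "regulated_summary"):
        print(task, "->", route(task, anthropic_enabled=True))
```

The point of the sketch is that routing is a policy decision (task fit plus tenant enablement), not a property of any single model.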
The Anthropic models Microsoft is adding: quick technical snapshot
Anthropic released the Claude 4 generation in May 2025, which introduced two principal variants relevant to Microsoft:
- Claude Sonnet 4 — a midsize, production‑oriented model positioned for high‑volume tasks that require a balance of responsiveness, cost efficiency and structured outputs (examples: slide generation, spreadsheet transformations, short‑to‑medium reasoning). Sonnet 4 has been broadly available through Anthropic's API and on cloud marketplaces such as Amazon Bedrock and Google Vertex AI since mid‑2025.
- Claude Opus 4.1 — an iterative upgrade to Opus 4 focused on frontier reasoning, agentic search and coding tasks, with improvements in multi‑step reasoning and code precision. Opus 4.1 was announced and made available in August 2025 and is targeted at workloads that demand deeper, more meticulous reasoning and agent behavior. Anthropic documents Opus 4.1 as having a large context window (200K tokens in baseline releases) and agentic enhancements useful for complex workflows.
Why Microsoft is diversifying: product, economic and strategic drivers
1. Product fit: “right model for the right task”
Benchmarks and internal comparisons consistently show different models excel on different classes of tasks. Anthropic's Sonnet family has been positioned for strong performance on structured, high‑throughput tasks like spreadsheet automation or slide layout — tasks common inside Microsoft 365 workflows — while Opus emphasizes deeper reasoning and agentic workflows. Routing workloads to the best fit can yield measurable quality improvements for users.
2. Cost and performance at scale
Running so many Copilot inferences across Microsoft's global install base is expensive. Lighter, task‑optimized models like Sonnet 4 have a lower per‑call compute cost than frontier models. Strategic routing reduces cost per task, preserves response latency and helps Microsoft maintain or improve margins while continuing to deliver high‑quality experiences.
3. Vendor risk and bargaining leverage
A single‑vendor reliance at the scale Microsoft operates creates dependency and negotiation exposure. Diversifying suppliers — and increasing options for hosting and routing — reduces single‑point risk and gives Microsoft leverage in long‑term partnerships with OpenAI and others. Adding Anthropic is a visible hedge while Microsoft continues investing in its own MAI model family.
The cloud plumbing: cross‑cloud inference, billing and data flows
A key operational detail is that Anthropic's enterprise deployments are commonly hosted on AWS and are available via Amazon Bedrock and other cloud marketplaces. That means Microsoft will often call Anthropic models hosted outside of Azure and may pay AWS or other cloud partners for those calls, introducing cross‑cloud inference and billing flows. Microsoft's official guidance confirms Anthropic models will run on third‑party clouds (AWS/Google) and be subject to Anthropic's terms and conditions. This cross‑cloud approach has several implications:
- Data residency and egress: Calls routed to Anthropic may traverse networks and jurisdictions outside a tenant’s primary Azure environment. Administrators must examine data residency, egress, and compliance settings before enabling Anthropic models.
- Billing flow complexity: When Copilot calls an Anthropic model hosted on AWS, the financial and contractual flows may involve third‑party billing. Microsoft has said end‑user pricing for Copilot will not change immediately, but the billing mechanics between Microsoft, Anthropic and cloud hosts are operational details enterprises should clarify.
- Latency and routing optimization: Cross‑cloud calls can increase latency if the nearest inference endpoint is not co‑located with the tenant's primary workloads. Microsoft's orchestration layer will need to balance latency, cost and capability when choosing backends; a toy scoring example follows this list.
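As a rough illustration of that balancing act, here is a hypothetical weighted‑scoring sketch. The endpoint names, latency and cost figures, and the weights are invented for the example; they do not describe Microsoft's real routing logic or pricing.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    est_latency_ms: float  # estimated round trip to the inference endpoint
    cost_per_call: float   # rough per-call cost in arbitrary units
    capability: float      # 0..1 fit for the requested task

def score(ep: Endpoint, w_latency: float = 0.4, w_cost: float = 0.3,
          w_capability: float = 0.3) -> float:
    """Lower is better: penalize latency and cost, reward capability."""
    return (w_latency * ep.est_latency_ms / 1000.0
            + w_cost * ep.cost_per_call
            - w_capability * ep.capability)

# Invented candidates: an Azure-hosted OpenAI endpoint vs. an AWS-hosted Claude endpoint.
candidates = [
    Endpoint("azure-openai-eastus",   est_latency_ms=80.0,  cost_per_call=1.0, capability=0.90),
    Endpoint("aws-bedrock-claude-us", est_latency_ms=140.0, cost_per_call=0.6, capability=0.85),
]

best = min(candidates, key=score)
print("route to:", best.name)
```

Changing the weights (for example, prioritizing latency for interactive Copilot surfaces) flips which backend wins, which is exactly why routing-policy transparency matters to administrators.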
Enterprise governance, security and admin controls
Microsoft is explicit that administrators must approve Anthropic models for tenant use and that model usage is subject to Anthropic's terms. This administrative gate is an important control for large organizations managing compliance, data protection and internal policy. Admins need to focus on a few concrete areas:
- Enablement policy: Adopt a controlled pilot process — enable Anthropic models for a small set of test users or sandbox tenants before widely rolling out.
- Data classification and filter rules: Identify which data classes (PHI, PII, regulated records) may not be routed to third‑party clouds or models. Use Microsoft’s administrative controls and DLP tooling to block or quarantine sensitive prompts or documents.
- Contractual terms and SLAs: Verify the legal and commercial terms that apply when Microsoft’s Copilot calls Anthropic models — especially with cross‑cloud hosting involved.
- Logging and auditing: Ensure Copilot telemetry records which model served each request so security teams can trace outputs and audit behavior (a sketch of such a provenance record follows this list).
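The sketch below shows what a per‑request provenance record could look like. The field names and values are assumptions for illustration; they are not an actual Copilot telemetry schema, but they capture the minimum a security team would want: which model, hosted where, served which request.

```python
import json
import time
import uuid

def provenance_record(tenant_id: str, user_id: str, agent: str,
                      model: str, model_host: str) -> dict:
    """Build an audit record noting which model served a request.
    Illustrative fields only, not a real Copilot schema."""
    return {
        "request_id": str(uuid.uuid4()),
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "tenant_id": tenant_id,
        "user_id": user_id,
        "agent": agent,            # e.g. "Researcher" or a Copilot Studio agent name
        "model": model,            # e.g. "claude-opus-4.1"
        "model_host": model_host,  # e.g. "aws" vs "azure", relevant for residency reviews
    }

print(json.dumps(
    provenance_record("contoso", "user-123", "Researcher", "claude-opus-4.1", "aws"),
    indent=2))
```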
Strategic consequences for Microsoft, OpenAI and Anthropic
For Microsoft
This move signals Microsoft's pivot from a single‑source Copilot to a multi‑model orchestration strategy. That approach preserves the benefits of specialized models while reducing dependency risks and optimizing costs. It also positions Microsoft as a platform that lets enterprises choose model diversity — potentially strengthening the commercial appeal of Azure and Microsoft 365 as neutral marketplaces for enterprise AI.
For OpenAI
OpenAI remains a key partner but this diversification reduces Microsoft's public reliance on a single external provider. That creates commercial leverage and product flexibility but also introduces the need to maintain high standards in OpenAI‑based experiences so customers still perceive value in those backends.
For Anthropic
Inclusion in Microsoft 365 Copilot is a major enterprise validation for Anthropic. It accelerates Anthropic's reach into business workflows at scale and is a commercial win that complements Anthropic's availability in cloud marketplaces like AWS Bedrock and Google Vertex AI. The partnership also pushes Anthropic to meet enterprise SLAs and compliance expectations at scale.
Risks, unknowns and caveats
While the technical direction is sensible, several important details are unconfirmed or require scrutiny:
- Routing rules and transparency: Microsoft has said a router will pick the best model for a task, but the exact routing policies, weighting for latency vs quality, and transparency to users/administrators are not fully public. This matters for reproducibility and forensics when Copilot outputs are later audited. Flag: unverifiable until Microsoft publishes routing policy details.
- Contractual duration and pricing impacts: Early reporting suggests end‑user Copilot pricing will not change immediately, but long‑term pricing dynamics and passthroughs between Microsoft, Anthropic and cloud hosts (AWS/Google) could alter cost structures. Administrators should verify contractual details.
- Data protection and compliance: Cross‑cloud calls may create new regulatory exposures in regions with strict data sovereignty rules. Enterprises in regulated sectors must assess whether Anthropic model use is acceptable under their compliance frameworks.
- Performance variability and QA: Different models will produce different outputs for the same prompt. Orchestrating consistent, predictable behavior across heterogeneous backends requires substantial testing, prompt engineering, and guardrails inside enterprise deployments.
- Dependence on third‑party cloud hosting: Relying on Anthropic models hosted on AWS or Google exposes Microsoft and its customers to availability and geopolitical dependencies outside Azure’s control — an operational and strategic tradeoff.
Practical checklist for IT decision makers
- Review admin controls: confirm how to enable/disable Anthropic models in your tenant and who needs approval.
- Pilot with non‑sensitive workloads: choose a narrow set of teams (e.g., marketing decks, non‑PII research) to validate Sonnet/Opus outputs and operator workflows.
- Update DLP and classification policies: block or tag sensitive content to prevent accidental cross‑cloud inference (see the illustrative pre‑routing check after this list).
- Audit telemetry and logging: ensure model provenance (which model served the request) is captured for compliance and troubleshooting.
- Clarify contractual terms: ask Microsoft (and when appropriate, Anthropic) for SLAs, data processing agreements and indemnities related to model hosting and inference.
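As a companion to the DLP item above, here is a deliberately simple pre‑routing check. The regex patterns and model list are illustrative assumptions; production deployments would rely on Microsoft Purview/DLP policies rather than hand‑rolled pattern matching, but the shape of the decision is the same: sensitive content should not reach cross‑cloud backends.

```python
import re

# Illustrative patterns only; real policies use enterprise DLP tooling.
SENSITIVE_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Models assumed (for this example) to be hosted outside the tenant's primary cloud.
CROSS_CLOUD_MODELS = {"claude-sonnet-4", "claude-opus-4.1"}

def allow_routing(prompt: str, target_model: str) -> bool:
    """Return False when a prompt matching a sensitive pattern would leave the primary cloud."""
    if target_model not in CROSS_CLOUD_MODELS:
        return True
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS.values())

print(allow_routing("Summarize the Q3 pipeline for the board deck", "claude-sonnet-4"))  # True
print(allow_routing("Patient SSN 123-45-6789 needs a follow-up", "claude-opus-4.1"))     # False
```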
How this fits into the broader enterprise AI landscape
Microsoft's Copilot move is the clearest public signal yet that enterprise AI is entering a multi‑model phase. Vendors will increasingly offer orchestration layers that let enterprises mix and match models for capability, cost and compliance. The winners will be platforms that can hide complexity from users while offering administrators clear governance, predictable costs and provable audit trails. Anthropic's inclusion accelerates that transition by demonstrating enterprise appetite for choice beyond the biggest single provider.
Short‑term outlook and likely next steps
- Expect Microsoft to extend Anthropic support gradually beyond Researcher and Copilot Studio into other high‑value Copilot experiences where Sonnet’s strengths are most evident (for example, Excel automations, PowerPoint design assistance and select Teams workflows). Early reporting and internal testing indicate those are plausible next targets.
- Microsoft will continue to invest in its in‑house models (MAI series) and in further integrations with other third‑party models. Copilot’s future is likely to be a curated, workload‑specific mix of in‑house, OpenAI, Anthropic and other specialized models.
- Enterprises will rapidly develop internal best practices for model selection, monitoring and governance. Vendors that provide strong observability and policy controls will gain traction in the IT procurement process.
Final analysis: what matters for WindowsForum readers and IT professionals
This is a pragmatic, consequential engineering and commercial decision by Microsoft that aligns product performance with the realities of scale. For end users the immediate difference may be subtle: Copilot will still look and feel like Copilot. For IT leaders, procurement teams and security professionals the difference is material: you now have to manage model choice as a new axis of policy — deciding which model families are allowed, for which data classes and which business functions.
Key takeaways:
- Choice is now built into Copilot — Researcher and Copilot Studio permit Anthropic models alongside OpenAI and Microsoft engines.
- Expect cross‑cloud inference — Anthropic models are commonly hosted in AWS/Google clouds; this introduces data‑flow and billing considerations.
- Governance matters more than ever — Admins must pilot carefully, codify DLP and data residency rules, and insist on clear logging and contractual protections.
- The orchestration era begins — The industrialization of AI inside productivity software moves from single‑provider hero models to multi‑vendor ecosystems where orchestration, instrumentation and governance determine winners.
Conclusion
Adding Anthropic's Claude Sonnet 4 and Claude Opus 4.1 to Microsoft 365 Copilot marks a deliberate shift toward multi‑model orchestration that balances capability, cost and vendor risk. The change is immediately useful for building and customizing agents and for deep‑reasoning Researcher workflows, but it also raises nontrivial governance, data residency and billing questions that enterprises must address. Microsoft's public documentation and industry reporting make the high‑level contours clear, yet several operational details remain to be verified by tenants through pilots and contractual review. For organizations that adopt Copilot seriously, model choice has become another dimension to master — and those that plan deliberately will extract the most value from this next phase of productivity AI.
Source: The Verge Microsoft embraces OpenAI rival Anthropic to improve Microsoft 365 apps
Source: Neowin Microsoft 365 Copilot is ditching OpenAI exclusivity for Anthropic's models
Source: OODA Loop Microsoft embraces OpenAI rival Anthropic to improve Microsoft 365 apps
Source: The Economic Times Microsoft brings Anthropic AI models to 365 Copilot, diversifies beyond OpenAI
Source: The Edge Malaysia Microsoft partners with OpenAI rival Anthropic on AI Copilot
Source: CNBC https://www.cnbc.com/2025/09/24/microsoft-adds-anthropic-model-to-microsoft-365-copilot.html
Source: Microsoft Expanding model choice in Microsoft 365 Copilot | Microsoft 365 Blog