Microsoft’s Copilot strategy has crossed a line that matters for every serious Microsoft 365 customer: the company is no longer treating Copilot as a clever drafting layer, but as an agentic execution platform that can plan, act, and return finished work across Word, Excel, PowerPoint, Outlook, Teams, and the broader Microsoft 365 stack. The most important shift is not just the introduction of Anthropic Claude inside Microsoft’s productivity ecosystem, but the pairing of that model diversity with Work IQ, Agent 365, and the broader Frontier program. Taken together, those moves redefine Copilot from a feature into an operating model for workplace AI. (https://www.microsoft.com/en-us/microsoft-365/blog/2026/03/09/powering-frontier-transformation-with-copilot-and-agents/)
Overview
Microsoft has spent the last three years turning Copilot from a chat box into a product family, and the latest wave of updates shows how far that plan has progressed. At launch, Microsoft 365 Copilot was mainly about productivity acceleration: summarize meetings, draft email, generate slides, and surface context from work data. That was ambitious enough, but it still assumed a human in the loop at every major step. The new direction assumes something more consequential: that AI can manage multi-step work over time, with permissions, memory, and governance wrapped around it.

The clearest sign of that shift is Microsoft’s embrace of multiple models rather than a single default. Official Microsoft documentation now says organizations can elect to use Anthropic models in Researcher and Copilot Studio, with Claude appearing in Microsoft 365 Copilot and Copilot Studio as part of a gradual rollout. Microsoft also notes that Anthropic models in Microsoft offerings are subject to specific data and processing constraints, which tells you this is not a cosmetic change; it is a platform-level rethinking of how AI services are governed inside the enterprise.
That matters because Microsoft’s earlier Copilot story was built around OpenAI-first integration. Now the company is openly pursuing model choice, and that changes both the technical stack and the commercial narrative. Customers no longer have to imagine Copilot as a monolith; they can see it as an orchestration layer where Microsoft chooses models, routes tasks, and applies security and compliance controls. In enterprise software, that distinction is huge.
The timing also matters. Microsoft’s March 2026 announcements around Frontier, Wave 3, and long-running work patterns suggest a deliberate move to make AI feel less like a novelty and more like infrastructure. The company says Wave 3 brings Claude into mainline Copilot chat for Frontier users, while Work IQ and Agent 365 provide the intelligence and governance layers needed to scale agents safely. That combination is not just feature bundling; it is a strategic attempt to make Microsoft 365 the control center of enterprise AI adoption.
The Road to Agentic Copilot
Microsoft’s Copilot story began with a familiar promise: save time on routine work. It was a strong pitch because it sat directly on top of Microsoft 365’s most common tasks. Users could ask Copilot to summarize a meeting, extract action items, write a first draft, or turn a document into a presentation. Those are meaningful wins, but they are still bounded tasks. They reduce friction; they do not replace workflows.

Over time, Microsoft started adding pieces that pointed toward something more autonomous. Copilot Studio enabled organizations to build custom agents. Researcher turned Copilot into a more capable synthesis engine. Model choice arrived later, with Anthropic support extending beyond the original OpenAI-centric design. Those changes may have looked incremental individually, but collectively they laid the foundation for a platform that could coordinate work rather than just answer questions.
From assistant to execution layer
The latest Copilot evolution is best understood as a move from assistance to execution. A drafting assistant helps you write faster; an execution layer can handle parts of the process that used to require repeated human intervention. That includes long-running research, multi-document synthesis, iterative revision, and eventually task completion across apps. Microsoft’s language around long-running, multi-step work is a clue that the company sees this as the next logical category, not just a demo feature.

This is also why governance is becoming a first-class product story. If Copilot is doing real work, then identity, permissions, auditability, and model behavior all become operational concerns rather than background settings. Microsoft’s Agent 365 framing is important here because it treats agents the way IT already treats users, devices, and apps: as things that must be managed, trusted, restricted, and monitored at scale. That is a much more serious posture than “try our AI helper.”
Key milestones in the transition include:
- Chat-first Copilot for summarization, drafting, and retrieval.
- Copilot Studio for custom workflow automation.
- Researcher for deeper research and synthesis.
- Claude support for model diversity.
- Wave 3 / Frontier for multi-step, long-running work.
- Agent 365 for governance and fleet management of agents.
What’s New in Wave 3
Wave 3 is where Microsoft’s strategy becomes more visible to ordinary users. Microsoft says the latest generation of agentic experiences in Word, Excel, PowerPoint, and Outlook is powered by Work IQ, and that Claude is now available in mainline Copilot chat through Frontier. That means the AI layer is not isolated in a separate app; it is increasingly embedded into the core productivity surfaces people already use every day.

This matters because the best productivity tools disappear into the workflow. If a user has to jump between apps or build a separate process to benefit from AI, adoption will remain limited. But when Copilot can be summoned inside the document, spreadsheet, or email thread, the friction drops dramatically. Microsoft understands that workplace AI wins not by being flashy, but by being unavoidable in the right places.
Why model choice matters
The addition of Anthropic to Microsoft 365 Copilot is more than a vendor-neutral talking point. It gives Microsoft a way to position Copilot as a multi-model orchestration platform instead of a single-model dependency. That is valuable for performance tuning, resilience, and enterprise buyer confidence. It also creates room for Microsoft to route different models depending on context, risk, or cost.

At the same time, model diversity introduces a new burden: consistency. If models write differently or handle edge cases differently, organizations will need stronger controls and clearer user guidance. Microsoft’s own documentation acknowledges phased rollout, admin approval, and use-case-specific availability, which is a reminder that model choice is not the same thing as model freedom. It is managed flexibility. (learn.microsoft.com)
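Microsoft has not published how its model routing works internally, but the general pattern of routing tasks to different models by context, risk, or cost can be sketched in a few lines. All model names, costs, and thresholds below are hypothetical, invented purely to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    sensitivity: str  # "low" | "high" — data-governance tier
    est_tokens: int   # rough size of the job

# Hypothetical catalog: cost per 1K tokens, and whether the model is
# approved for sensitive data under tenant policy.
MODELS = {
    "fast-general":  {"cost": 0.001, "sensitive_ok": False},
    "deep-reasoner": {"cost": 0.015, "sensitive_ok": True},
}

def route(task: Task) -> str:
    """Pick a model: policy filter first, then cost/size heuristics."""
    candidates = {
        name: m for name, m in MODELS.items()
        if task.sensitivity != "high" or m["sensitive_ok"]
    }
    # Large jobs get the deeper model; otherwise the cheapest eligible wins.
    if task.est_tokens > 10_000 and "deep-reasoner" in candidates:
        return "deep-reasoner"
    return min(candidates, key=lambda n: candidates[n]["cost"])

print(route(Task("summarize this memo", "low", 800)))      # fast-general
print(route(Task("audit this contract", "high", 20_000)))  # deep-reasoner
```

The point of the sketch is the ordering: governance constraints prune the candidate set before any cost or quality heuristic runs, which is consistent with Microsoft's emphasis on admin approval preceding model choice.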
Important implications of Wave 3 include:
- More natural language-to-action flows inside Office apps.
- Deeper embedding of AI in daily work.
- A stronger case for Microsoft 365 as the AI operating layer.
- Higher demand for model governance and traceability.
- A likely increase in enterprise experimentation around agent workflows.
Researcher, Critique, and Council
One of the most telling parts of the current Copilot roadmap is Microsoft’s investment in making research output more reliable. The new Critique and Council modes for Researcher indicate that Microsoft is not satisfied with simply generating answers; it also wants to shape how those answers are tested, challenged, and refined. That is an important pivot because enterprise AI’s real problem is not fluency. It is trust.

This approach is smart for another reason: it mirrors how serious knowledge work actually happens. Good analysis is rarely the result of a single answer. It usually emerges through comparison, disagreement, and revision. By turning critique into a product feature, Microsoft is acknowledging that research agents need adversarial pressure, not just better prompts.
The reliability problem
AI research tools fail in predictable ways. They overstate confidence, collapse nuance, and sometimes produce attractive but weakly sourced summaries. Microsoft’s response appears to be to formalize the review process inside the product itself, rather than leaving users to catch errors manually. That is a promising direction, but it is also a tacit admission that the system still needs oversight.

In enterprise settings, that oversight matters more than speed. A mediocre draft is annoying. A polished but wrong internal briefing can be expensive. If Critique and Council genuinely improve source discipline and argument quality, they could become one of the most useful Copilot updates of the year. If they do not, they will become yet another AI feature that looks sophisticated until someone asks for the evidence. That is the real test.
What makes these modes notable:
- They shift focus from generation to validation.
- They reward multi-pass reasoning over instant answers.
- They align with enterprise needs for auditability.
- They can reduce overreliance on a single model output.
- They make Researcher feel more like a workflow than a chat response.
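Microsoft has not documented how Critique and Council work internally, but the general multi-pass pattern they gesture at — generate, challenge from several angles, revise — can be sketched with stub model calls. The reviewer personas and the lambdas standing in for model calls here are invented for illustration; a real implementation would call an LLM API:

```python
from typing import Callable

# Stand-in for a real model call; in practice this would hit an LLM API.
Model = Callable[[str], str]

def council_review(draft: str, reviewers: dict[str, Model], revise: Model) -> str:
    """Run a draft past several critical 'personas', then revise once
    against the pooled objections — a single pass of a critique loop."""
    objections = []
    for name, critic in reviewers.items():
        note = critic(f"Find the weakest claim in:\n{draft}")
        objections.append(f"[{name}] {note}")
    critique = "\n".join(objections)
    return revise(f"Revise the draft to address:\n{critique}\n\nDraft:\n{draft}")

# Toy stubs so the sketch runs without any API:
skeptic = lambda p: "cites no primary source"
economist = lambda p: "ignores cost side"
reviser = lambda p: "REVISED: " + p.splitlines()[-1]

out = council_review("AI boosts productivity.",
                     {"skeptic": skeptic, "economist": economist}, reviser)
print(out)  # REVISED: AI boosts productivity.
```

The value of the pattern is that objections are pooled before revision, so the final output has to survive multiple lines of attack rather than satisfying one prompt.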
Copilot Notebooks and Knowledge Work
If Copilot is becoming an execution layer, Copilot Notebooks is where that ambition becomes visible in everyday knowledge work. Microsoft’s latest work around notebooks is pushing the feature beyond a scratchpad and into a structured workspace for collecting sources, organizing context, and turning raw material into reusable outputs. Internal and file-based evidence from the current reporting cycle suggests Microsoft is preparing notebooks to generate Word documents and PowerPoint presentations directly from notebook content, which would tighten the loop between research and deliverable.

That direction is strategically important because most work does not begin as a finished document. It begins as fragments: meeting notes, links, rough ideas, references, and half-formed outlines. A notebook that can absorb that mess and then emit a decent first draft is far more valuable than a chatbot that merely answers questions. It becomes a bridge between thought and production.
From scratchpad to deliverable factory
The move from notebook to finished artifact reflects a larger productivity truth: users want fewer context switches. If research lives in one place and output in another, AI still feels like overhead. But if the same workspace can gather material, organize it, critique it, and render it into a deck or memo, adoption rises because the process becomes coherent.

This is also where Microsoft can differentiate against standalone AI tools. Competitors can generate text, but Microsoft controls the workspace where text becomes a work product. That gives it an enormous advantage, especially in enterprises that already live inside Microsoft 365. In practical terms, Copilot Notebooks may matter less as a flashy feature than as a quiet accelerant for routine professional output.
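The fragments-to-deliverable flow described above is, at its core, a pipeline from loose notes to a structured first draft. A deliberately naive sketch makes the shape concrete; no Copilot API is used or implied, and the slide-title heuristic is invented:

```python
def fragments_to_outline(fragments: list[str]) -> str:
    """Group loose notes into a crude deck outline: one slide per note,
    with a title pulled from the note's first few words."""
    lines = ["# Draft deck"]
    for i, frag in enumerate(fragments, 1):
        title = " ".join(frag.split()[:4])
        lines.append(f"## Slide {i}: {title}")
        lines.append(f"- {frag}")
    return "\n".join(lines)

notes = [
    "Q3 revenue grew 12% on cloud demand",
    "Headcount flat; hiring paused in EMEA",
]
print(fragments_to_outline(notes))
```

An actual notebook feature would use a model to cluster, order, and title the fragments, but the contract is the same: unstructured input on one side, a document-shaped draft on the other.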
Why the enterprise angle is stronger than the consumer story
Consumers like convenience, but enterprises pay for repeatability. A notebook feature that helps one person brainstorm is nice; a notebook feature that standardizes research workflows across a department is transformative. Microsoft’s emphasis on rollout timing and agent integration suggests the company sees notebook-driven content creation as a template for broader organizational use.

That is why the enterprise version of this story is stronger than the consumer one. Consumers may use Copilot opportunistically. Enterprises can embed it in process. When AI sits inside a repeatable workflow, it becomes harder to ignore and easier to justify. That is where Microsoft’s monetization story begins to look more durable.
Agent 365 and the Governance Stack
Every time Microsoft adds more power to Copilot, it also adds more pressure on the governance layer. Agent 365 is Microsoft’s answer to that pressure. The company frames it as the control plane for agents, which is exactly the right metaphor: as agents multiply, someone has to manage identity, access, policy, telemetry, and lifecycle. Without that, “AI transformation” quickly becomes “security headache.”

This is where Microsoft’s enterprise credibility matters. A consumer AI app can get away with loose edges and fast iteration. Microsoft 365 cannot. Enterprises need policy hooks, admin workflows, tenant-level controls, and clear boundaries around what the model can see and do. Microsoft’s documentation around Anthropic access, admin approval, and subprocessor handling suggests that the company is trying to make the agentic future compatible with familiar enterprise compliance expectations.
Governance is the product
That may sound dull, but it is actually one of the most important competitive advantages Microsoft has. If buyers trust the control plane, they will be more willing to turn on the AI. If they do not, every cool feature becomes a pilot that never scales. In other words, governance is not merely a safeguard; it is the sales argument.

There is also a deeper architectural implication. Agent governance turns Copilot from a single product into a platform of products and policies. That means Microsoft can sell not only AI capability, but also the machinery that keeps AI safe. For CIOs and CISOs, that is attractive because it promises fewer moving parts than assembling an AI stack from scratch. Less integration, less sprawl, fewer surprises.
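Agent 365's actual control plane is not publicly specified, but "manage agents the way IT manages users and devices" implies a minimal registry shape something like the following. The field names and scope strings are guesses for illustration, not Microsoft's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Agent:
    agent_id: str
    owner: str            # accountable human, like a device owner
    scopes: set[str]      # explicit grants, e.g. {"files.read"}
    enabled: bool = True
    audit: list[str] = field(default_factory=list)

    def act(self, scope: str, action: str) -> bool:
        """Deny by default; log every attempt, allowed or not."""
        ok = self.enabled and scope in self.scopes
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.audit.append(f"{stamp} {action} scope={scope} allowed={ok}")
        return ok

bot = Agent("rpt-001", owner="alice@contoso.com", scopes={"files.read"})
assert bot.act("files.read", "summarize Q3 deck")     # within its grant
assert not bot.act("mail.send", "email the summary")  # outside its grant
bot.enabled = False                                   # kill switch
assert not bot.act("files.read", "retry summary")
print(len(bot.audit), "audited actions")  # every attempt was recorded
```

The three properties the sketch encodes — deny-by-default scopes, a named human owner, and an audit trail that records refusals as well as grants — are exactly the ones the article argues make agents manageable "at scale."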
Competitive Pressure on OpenAI, Google, and Salesforce
Microsoft’s latest Copilot updates reshape the competitive field in three directions at once. First, they reduce dependence on a single model vendor by bringing Anthropic into the picture. Second, they deepen Microsoft 365’s moat by turning the app suite into an AI runtime. Third, they force rivals to answer a hard question: can they offer both intelligence and operational control at enterprise scale?

For OpenAI, the implication is subtle but real. Microsoft remains deeply tied to OpenAI in many areas, but model pluralism means Copilot is not beholden to a single provider story. That lowers platform risk for Microsoft and gives customers the sense that they are buying a managed ecosystem rather than a one-company bet. It is a sophisticated hedge.
Google Workspace and the productivity battle
Google’s AI story in Workspace has been strong on assistive features, but Microsoft is now framing the battle around workflow execution and governance, not just assistant quality. That is a subtle but powerful shift. If the buyer believes the real value lies in turning work into orchestrated agent flows, Microsoft has a natural advantage because it already owns the desktop productivity layer.

Salesforce, meanwhile, faces a different challenge. It has strong agent branding and deep workflow credentials in CRM, but Microsoft is attacking from a broader angle: the general-purpose office stack. That means Copilot can become the daily interface for many tasks before users ever reach specialized business apps. The risk for competitors is not just feature parity; it is being normalized out of the workflow.
Competitive takeaways:
- Microsoft is competing on platform control, not just model quality.
- Anthropic support weakens the idea of a single-vendor AI stack.
- Agent governance becomes a differentiator, not a back-office detail.
- Office-native AI is harder to displace than standalone chat tools.
- The battleground is moving from “who answers best” to “who executes best.”
Enterprise Impact: What IT Teams Need to Know
For enterprise buyers, the big question is no longer whether Copilot can help employees. It is whether Copilot can be deployed safely, measured properly, and governed consistently across a mixed environment. Microsoft’s current documentation suggests the answer is increasingly yes, but only if administrators are willing to engage with the controls. Anthropic access must be enabled by admins, rollout is phased, and certain model features are still bounded by tenant and regional policies.

That is a practical reality, not a flaw. Enterprise AI rarely succeeds through default settings alone. It succeeds when IT can define who gets access, what data the model can use, which workflows are permitted, and how outputs are reviewed. Microsoft’s newer Copilot architecture appears to accept that truth and build around it rather than pretending governance is optional.
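The documentation's emphasis on admin opt-in and regional constraints maps onto a familiar tenant-policy check: a model is usable only if an administrator has enabled it and no regional hold applies. A toy version of that gate, with invented policy fields and model names rather than Microsoft's actual configuration surface:

```python
# Hypothetical tenant policy — field names are illustrative only.
TENANT_POLICY = {
    "models_enabled": {"default-oai"},    # admins must add others explicitly
    "region": "EU",
    "region_blocked": {"claude": set()},  # per-model regional holds, if any
}

def model_available(model: str, policy: dict) -> bool:
    """A model is usable only if admin-enabled AND not regionally held."""
    if model not in policy["models_enabled"]:
        return False
    return policy["region"] not in policy["region_blocked"].get(model, set())

assert not model_available("claude", TENANT_POLICY)  # not yet opted in
TENANT_POLICY["models_enabled"].add("claude")        # admin flips the switch
assert model_available("claude", TENANT_POLICY)      # now passes both gates
```

However Microsoft actually implements it, the ordering matters for buyers: availability is a conjunction of admin intent and compliance constraints, not a product default.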
Adoption will depend on policy maturity
In most organizations, the technical challenge is solvable. The cultural challenge is harder. Teams need to trust the AI enough to use it, but distrust it enough to verify it. That balance is awkward, yet necessary. Microsoft’s Critique, Council, and Agent 365 investments are essentially a toolkit for teaching companies how to live with that tension.

The upside is significant. If Copilot reduces time spent on drafting, formatting, summarization, and cross-app coordination, the productivity gains can compound fast. The downside is equally clear: without discipline, organizations may create a layer of AI-generated work that looks efficient but hides errors, bias, or stale assumptions. That risk is manageable, but only if leadership treats governance as a business process, not an IT checkbox.
Consumer Impact and Everyday Workflow Changes
Consumer-facing Copilot changes may not carry the same governance weight, but they can still alter daily habits. If users can move from rough notes to structured documents, from meeting chatter to action items, and from research fragments to presentation drafts without leaving Microsoft 365, the software becomes less a tool and more a companion layer. That is emotionally important because convenience shapes loyalty.

The danger is that consumer users may overestimate reliability. When AI sits inside familiar apps, its outputs can feel more trustworthy than they are. A clean slide deck or polished memo can conceal weak logic. Microsoft’s new emphasis on critique and model selection is reassuring, but it does not eliminate the need for user judgment.
Small workflow changes, big behavioral effects
The most powerful consumer changes are often the smallest. A notebook that turns into a deck. A research result that can be re-checked. A chat session that can route to a better model. A note that becomes a draft without copy-paste friction. Those details add up to a meaningfully different relationship with software.

That is why Microsoft’s Copilot push is not just about AI novelty. It is about habit formation. Once users begin to expect documents, meetings, and emails to be AI-assisted by default, the old workflow feels slow. At that point, the product ceases to be optional in practice, even if it remains optional in theory.
Strengths and Opportunities
Microsoft’s current Copilot direction has several obvious strengths. It aligns the company’s core productivity stack with the most important AI trend in enterprise software: agents that do work, not just discuss it. It also gives Microsoft multiple levers to improve adoption, from model choice to governance to tighter app integration. That combination is rare, and it is exactly why the strategy feels bigger than a feature refresh.

- Deep Microsoft 365 integration makes AI feel native rather than bolted on.
- Anthropic model support broadens customer choice and reduces platform dependence.
- Work IQ creates a smarter context layer for agentic behavior.
- Agent 365 gives IT a credible control plane for AI governance.
- Researcher’s Critique and Council modes address trust, not just speed.
- Copilot Notebooks can shorten the path from research to deliverable.
- Enterprise bundling may accelerate adoption in large deployments.
Risks and Concerns
The biggest risks are the ones that come with any serious agent platform: hallucinations, over-automation, policy complexity, and uneven rollout. Microsoft can add as many governance features as it wants, but if users do not understand when the model is acting, what it can access, and how it reasons, trust will remain brittle. The more powerful Copilot becomes, the more expensive its mistakes can be.

- Hallucinated outputs could be more damaging when they are polished and embedded in workflow.
- Permission sprawl may arise if agents are given broad access without tight controls.
- Model inconsistency could confuse users when outputs vary by model or task.
- Admin complexity may slow enterprise deployment despite the product’s promise.
- Regional and compliance constraints could limit global consistency.
- Automation bias may encourage users to accept AI output too quickly.
- Pricing pressure could make advanced Copilot tiers harder to justify broadly.
Looking Ahead
The next phase of Copilot will likely be defined less by headline announcements and more by how much of the workflow Microsoft can quietly absorb. If notebooks become document factories, Researcher becomes more reliable, and Agent 365 proves manageable at scale, Copilot could evolve into the default interface for enterprise knowledge work. That would be a big deal not just for Microsoft, but for the entire productivity software market.

The key question is whether users experience this as empowerment or overhead. A truly useful agent platform should reduce coordination costs, not merely relocate them. If Copilot requires constant supervision, users will retreat to manual workflows. If it can genuinely handle long-running work with clear controls, Microsoft will have created something far more valuable than an assistant.
What to watch next:
- Wider rollout of Claude in Researcher and mainline Copilot chat.
- Broader availability of Agent 365 and its admin controls.
- More direct document creation from Copilot Notebooks.
- Additional model-choice features across Microsoft 365.
- Evidence that Critique and Council improve real research quality.
- Signs that enterprises are moving from pilots to production deployments.
Source: Fathom Journal