Microsoft made Copilot Cowork available in the Microsoft 365 Copilot mobile app for iOS and Android on May 5, 2026, while adding reusable Cowork Skills and third-party plugin support for organizations testing the agent through its Frontier preview program. The move is easy to summarize and harder to dismiss: Microsoft is turning Copilot from a chat box into a delegated worker that follows you off the desktop. That does not mean the age of autonomous office labor has arrived fully formed. It means Microsoft has decided the next battle for enterprise AI will be fought over where agents can act, how they remember procedures, and which business systems they are allowed to touch.
Microsoft Moves the Agent From the Desk to the Pocket
The mobile launch matters because Copilot Cowork is not another text-generation feature tucked into Word or Teams. It is a long-running agent meant to plan and execute multi-step work across Microsoft 365: draft documents, schedule meetings, send messages, research internal material, and produce files while the user supervises. Putting that workflow in the Microsoft 365 Copilot mobile app changes the posture from “sit down and prompt” to “delegate and check in.”

That distinction is not cosmetic. The original Copilot pitch was assistance at the point of work: summarize this thread, rewrite this paragraph, generate this slide. Cowork asks for a different kind of trust. It wants users to hand over a bundle of intent and let the system decide which tools, files, and intermediate steps are needed to get there.
Mobile access is Microsoft’s bet that agentic work will not behave like desktop productivity software. A user may ask for a briefing in the morning, approve a draft email between meetings, reject a calendar change at lunch, and review an output file on the train home. The phone becomes the control surface for a worker that is supposed to keep moving even when the human is not parked in front of Outlook.
That is the appeal, and also the risk. When an assistant merely drafts a paragraph, the blast radius is small. When an agent can send, schedule, post, reorganize files, and touch external systems through plugins, the mobile approval screen becomes a governance boundary.
Frontier Is Microsoft’s Favorite Word for “Not Yet Safe Enough for Everyone”
Copilot Cowork remains a Frontier feature, which is Microsoft’s way of giving ambitious customers early access while keeping the product at arm’s length from general availability expectations. Frontier is not a consumer beta dressed in enterprise clothing; it is a proving ground for the company’s most aggressive AI ideas inside real organizations. That distinction matters because Cowork’s value can only be tested against messy corporate reality.

The office is not a clean benchmark. It is a swamp of permissions, stale SharePoint sites, conflicting calendar norms, half-documented processes, duplicate files, ambiguous acronyms, and employees who use Teams chats as unofficial systems of record. An agent that performs impressively in a demo can fail in fascinating ways once it meets a decade of accumulated enterprise entropy.
Microsoft’s preview language gives the company room to change behavior, restrict capability, or pull back from edge cases. IT departments should read that not as a warning to ignore Cowork, but as a warning not to confuse experimentation with deployment. Frontier is where organizations learn what they would need to govern before they scale.
That is probably the right place for Cowork to be. The product category is still immature, and Microsoft is asking enterprises to test a new operating model for work, not merely a new button in the ribbon. A cautious preview is less exciting than a broad launch, but it is far more honest.
The Anthropic Collaboration Is a Signal, Not a Footnote
Microsoft’s close collaboration with Anthropic is one of the more revealing parts of the Cowork story. For years, Microsoft’s AI identity was nearly inseparable from OpenAI. Copilot was the commercial face of that partnership, and the company’s early advantage came from moving GPT models quickly into Microsoft 365, GitHub, Windows, and Azure.

Cowork complicates that narrative. By bringing Anthropic’s Claude Cowork concepts and model strengths into Microsoft 365 Copilot, Microsoft is signaling that enterprise AI will be multi-model by necessity. The winner is not the company with ideological loyalty to one lab; it is the company that can route work through the best available model while wrapping it in enterprise identity, permissions, compliance, and application context.
That is a very Microsoft move. The company does not need every breakthrough to originate in Redmond or even from its closest partner. It needs those breakthroughs to become more useful once they are inside Microsoft 365, governed by Entra identities, grounded in Graph data, and packaged for IT procurement.
The collaboration also says something about the agent market. Anthropic has been particularly strong in workflows that involve tool use, coding, structured reasoning, and long-running task execution. Microsoft’s decision to lean on that work for Cowork suggests the company sees agent behavior as a technical challenge distinct from chat quality. A fluent assistant is not automatically a reliable coworker.
Work IQ Is the Real Product Microsoft Is Selling
Microsoft describes Cowork as built on Work IQ, its intelligence layer for understanding organizational data, tools, relationships, and work patterns. That phrase can sound like marketing fog until you look at what Microsoft is trying to assemble. Work IQ is the connective tissue that turns Copilot from a model interface into an enterprise control plane.

The basic argument is simple: public internet knowledge is not enough for workplace AI. A useful agent needs to know which files matter, who owns a project, what meetings created the decision trail, which SharePoint location is authoritative, and which business system contains the live record. That understanding cannot come from a foundation model alone.
This is where Microsoft’s old enterprise advantages come roaring back. The company already sits across email, calendars, documents, chats, meetings, identity, device management, security logs, and increasingly business applications through Dynamics and Power Platform. If Work IQ can make that substrate intelligible to agents without blowing up permissions, Microsoft has something far harder to copy than a clever chatbot.
But Work IQ is also where the trust burden concentrates. The more context Cowork has, the more useful it becomes. The more useful it becomes, the more sensitive the access questions get. Enterprises will want to know not only what the agent can see, but why it used a particular document, whether it respected information barriers, and how its actions can be audited after the fact.
Skills Turn Prompting Into Process
The addition of Cowork Skills is more important than it sounds. A skill is essentially a reusable set of instructions that tells Cowork how to perform a particular kind of work. Instead of writing the same elaborate prompt every Friday for a weekly report, a user or organization can encode the preferred process once and let the agent reuse it.

That shifts the center of gravity from ad hoc prompting to process design. The early Copilot era rewarded employees who could phrase requests well. The agent era will reward teams that can describe their work clearly enough for machines to repeat it. In a grimly amusing way, AI may finally force organizations to document the workflows they have been improvising for years.
Microsoft says Cowork includes built-in skills for common Microsoft 365 work: handling Word, Excel, PowerPoint, and PDF files; email, scheduling, calendar management, and meetings; and daily briefings, enterprise search, communications, deep research, and adaptive cards. Custom skills can also be stored in OneDrive as Markdown instructions, giving users and teams a lightweight way to teach the agent their own routines.
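Microsoft has not published a formal schema for these skill files in the preview documentation, but a custom skill stored in OneDrive might look something like the following hypothetical sketch — the headings, folder names, and constraints here are illustrative assumptions, not an official format:

```markdown
<!-- weekly-status-skill.md — hypothetical example; not an official Microsoft schema -->
# Skill: Weekly Status Summary

## Purpose
Summarize the team's week for the Friday status email.

## Sources
- Use only documents in the "Team Weekly" SharePoint folder.
- Include decisions recorded in the #project-alpha Teams channel.

## Steps
1. Collect documents and messages from the past 7 days.
2. Group findings under: Shipped, In Progress, Blocked.
3. Draft an email to the team distribution list, 300 words maximum.

## Constraints
- Do not send the email; stop at the draft for human review.
- Exclude anything labeled "Confidential".
```

The interesting property of a format like this is that it is reviewable: a business owner can read the file, see exactly which sources and constraints the agent was given, and decide whether the routine deserves to be promoted from a personal convenience to a sanctioned process.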
That design is clever because it meets workers where they already live. OneDrive may not be glamorous infrastructure, but it is familiar, permissioned, and already part of the Microsoft 365 fabric. A skill stored as a file is easier for many organizations to understand than a custom agent built in a developer tool.
Still, skills introduce their own governance headache. A bad prompt is ephemeral; a bad skill is reusable. If an employee creates a skill that mishandles customer data, uses the wrong source of truth, or sends status updates with the wrong assumptions, the error can become operationalized. IT and business owners will need to decide which skills are personal conveniences and which are sanctioned business processes.
Plugins Are Where Cowork Stops Being an Office Trick
Third-party plugin support is the point at which Cowork becomes strategically interesting. Microsoft 365 is powerful, but most businesses do not run entirely inside Microsoft 365. Customer records, market data, product roadmaps, legal workflows, project plans, financial intelligence, and operational dashboards often live elsewhere.

The first wave of plugin names tells us what Microsoft is aiming for. HubSpot brings customer and marketing workflows into view. LSEG, Moody’s, and S&P Global Energy point toward finance, risk, market intelligence, and industry-specific analysis. Notion, Miro, and monday.com suggest collaboration and project-management terrain where Microsoft competes but cannot pretend it has universal ownership.
This is the old platform play in new clothing. Microsoft does not need to own every system of record if Copilot becomes the agentic layer that can reason across them. The agent becomes the front door, the workflow broker, and eventually the place where users expect business systems to respond to natural language instructions.
That is why plugin support is more than a feature checkbox. It is the beginning of a marketplace question: whose agents get to act on which systems, under whose policies, and with whose audit trail? If Copilot Cowork becomes the trusted agent inside Microsoft 365 tenants, third-party vendors may find themselves needing Copilot integration the way they once needed Office export, Outlook sync, or Teams apps.
The competitive implication is blunt. Microsoft is trying to make its productivity suite the operating environment for enterprise agents. Plugins let the company extend that ambition beyond the Microsoft estate without forcing every customer to abandon the tools they already use.
The Human Approval Loop Is Doing a Lot of Work
Microsoft emphasizes that Cowork asks users to approve important actions before they happen. That is not just a usability choice; it is the mechanism that makes the whole proposition politically acceptable. The agent can plan and prepare, but the human remains the accountable actor at the moment of consequence.

This approval model fits the current state of AI reliability. Agents can be useful without being infallible if they are constrained by checkpoints. A draft email can be reviewed. A calendar action can be accepted or rejected. A Teams post can pause before it becomes visible to a channel full of colleagues.
The challenge is that approval fatigue is real. If Cowork asks too often, users will stop delegating meaningful work. If it asks too rarely, IT will worry that the agent is one hallucinated inference away from a business incident. The correct balance will vary by task, department, data sensitivity, and organizational culture.
Microsoft’s risk-level framing for actions is an attempt to make that balance manageable. Medium- and high-risk actions need different treatment from low-risk document generation. Over time, the most mature deployments will likely define policies that distinguish between drafting, changing, sending, deleting, publishing, purchasing, and updating external records.
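Microsoft has not disclosed how Cowork's risk tiers are implemented, but the kind of policy layer described above can be sketched in a few lines. The verbs, tiers, and thresholds below are illustrative assumptions, not Copilot's actual rules:

```python
# Illustrative sketch of an action-risk policy for a supervised agent.
# The tiers and verbs are assumptions, not Microsoft's Cowork implementation.
RISK_TIERS = {
    "draft": "low",        # produces content the user will review anyway
    "search": "low",
    "schedule": "medium",  # changes shared state, but is reversible
    "send": "high",        # visible to others the moment it executes
    "delete": "high",
    "publish": "high",
    "update_external": "high",  # touches a system of record via plugin
}

def requires_approval(action: str, data_sensitivity: str = "general") -> bool:
    """Return True if a human must approve before the agent acts."""
    tier = RISK_TIERS.get(action, "high")  # unknown actions default to high risk
    if data_sensitivity == "confidential":
        # For confidential data, only a low-risk read-only search proceeds alone.
        return not (tier == "low" and action == "search")
    return tier in ("medium", "high")

# Low-risk drafting proceeds unattended; sending always pauses for a human.
print(requires_approval("draft"))  # False
print(requires_approval("send"))   # True
```

Two defaults in this sketch carry most of the governance weight: unknown verbs are treated as high risk, and sensitivity labels tighten the policy rather than loosen it. Whatever Microsoft's real rules turn out to be, mature deployments will want both properties.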
That policy layer may become more important than model quality. Enterprises do not merely need smarter agents; they need agents that know when not to act.
Mobile Approval Makes IT’s Governance Problem More Urgent
The mobile interface sounds empowering, and for individual users it probably will be. But for IT administrators, mobile agent control raises a familiar set of worries in a new form. The same employee who would carefully review a desktop dialog at 10 a.m. may rubber-stamp an agent action from a phone while walking into a meeting.

This is not a reason to reject mobile access. It is a reason to design for the reality of mobile behavior. Approval prompts need enough context to support a meaningful decision without burying the user in tiny-screen complexity. The system must show what Cowork is about to do, which data informed it, and what the consequence will be.
Device security also matters. If the Microsoft 365 Copilot mobile app becomes a control panel for agents that can send emails, schedule meetings, create documents, and activate plugins, then mobile device management is no longer just about protecting access to content. It is about protecting access to delegated action.
That brings Cowork squarely into the world WindowsForum readers know well: conditional access, app protection policies, identity governance, audit logs, data loss prevention, and least-privilege design. The agent may be new, but the administrative instincts are old. Trust the user, verify the device, constrain the app, log the action.
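That maxim translates directly into the kind of gating logic admins already express through conditional access and app protection policies. A deliberately simplified sketch — the parameters and checks are illustrative stand-ins, not real Entra or Intune APIs:

```python
# Simplified gate for a delegated agent action. The signal names are
# illustrative stand-ins, not actual Entra/Intune/Graph APIs.
def allow_agent_action(user_authenticated: bool,
                       device_compliant: bool,
                       app_protected: bool,
                       audit_log: list) -> bool:
    """Trust the user, verify the device, constrain the app, log the action."""
    decision = user_authenticated and device_compliant and app_protected
    # Log every decision, not just denials: delegated agency needs a full trail.
    audit_log.append({
        "user_ok": user_authenticated,
        "device_ok": device_compliant,
        "app_ok": app_protected,
        "allowed": decision,
    })
    return decision

log: list = []
print(allow_agent_action(True, True, False, log))  # unprotected app -> denied
print(len(log))  # 1: even the denial leaves an audit entry
```

The point of the sketch is the shape, not the signals: every delegated action passes through an all-of-the-above identity, device, and app check, and every attempt, allowed or not, lands in the log.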
Microsoft has an advantage because it can connect Cowork to the existing Microsoft 365 security stack. But integration is not the same as configuration. Many organizations will discover that their AI readiness depends on whether their identity, data classification, and information governance practices were already in decent shape.
The Feature Set Reveals Microsoft’s Bigger AI Roadmap
Cowork’s new capabilities line up neatly with Microsoft’s broader agent strategy. Mobile access handles ubiquity. Skills handle repeatability. Plugins handle extensibility. Work IQ handles context. Frontier handles risk management and customer feedback before broader rollout.

Taken together, these pieces suggest that Microsoft sees the future of Copilot less as a single assistant and more as an ecosystem of supervised agents. Some will be personal. Some will be departmental. Some will be embedded in business applications. Some will be built by developers, while others will be assembled from natural-language skills and approved plugins.
This is why the Cowork launch should not be judged solely by whether it can perfectly execute a demo task today. Early agents will be uneven. They will fail on ambiguity, expose process gaps, and occasionally make users wonder whether it would have been faster to do the work manually. The strategic question is whether the scaffolding is forming around them.
Microsoft appears to be building that scaffolding with unusual speed. Agent 365, Work IQ, Copilot Studio, Microsoft Graph, MCP-oriented tooling, plugins, and Frontier previews all point toward an enterprise AI platform in which agents are managed assets rather than novelty bots. Cowork is one of the more visible expressions of that shift because it has a simple human metaphor: delegate work to it and watch what happens.
That metaphor is powerful, perhaps too powerful. A coworker has judgment, accountability, institutional memory, and social awareness. Cowork has models, context, tools, and approval prompts. The gap between those two realities is where both the opportunity and the disappointment will live.
Enterprise Buyers Should Watch the Boring Details
For CIOs and administrators, the right reaction to Cowork is neither hype nor dismissal. The product is clearly early, but the direction is clear enough to merit serious testing. The organizations that learn how to govern agents during the preview phase will be better positioned when Microsoft inevitably pushes these capabilities toward mainstream licensing and broader availability.

The first evaluation should be narrow. Pick workflows that are repetitive, document-heavy, and reviewable. Weekly summaries, meeting preparation, internal research packets, customer-account briefings, and draft communications are better candidates than high-stakes record updates or anything involving regulated decisions.
The second evaluation should be organizational, not just technical. Cowork will expose which processes are well understood and which are tribal knowledge wrapped in calendar invites. If the agent cannot follow a workflow, the problem may be the model. It may also be that the workflow was never actually defined.
The third evaluation should focus on evidence. Can administrators see what Cowork accessed? Can users understand why it made a recommendation? Can approvals be audited? Can plugins be limited by role, group, or sensitivity? Can skills be reviewed, versioned, retired, or promoted from personal experiments to sanctioned procedures?
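Those evidence questions map naturally onto a structured audit record per agent action. A hypothetical shape — the class and field names are assumptions for illustration, not an actual Microsoft 365 audit schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical audit record for one agent action; field names are
# illustrative, not an actual Microsoft 365 audit log schema.
@dataclass
class AgentActionRecord:
    agent: str                       # which agent acted, e.g. "copilot-cowork"
    on_behalf_of: str                # the delegating user's identity
    action: str                      # what was attempted, e.g. "send_email"
    sources_read: list = field(default_factory=list)   # documents consulted
    plugins_used: list = field(default_factory=list)   # external systems touched
    approved_by: Optional[str] = None  # who approved, or None if auto-allowed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AgentActionRecord(
    agent="copilot-cowork",
    on_behalf_of="jdoe@contoso.com",
    action="send_email",
    sources_read=["/sites/TeamWeekly/status.docx"],
    approved_by="jdoe@contoso.com",
)
# An auditor can answer "what did it read, who signed off" from one record.
print(record.action, record.approved_by)
```

If the real product cannot produce something with at least this much structure per consequential action, the pilot has found its first governance gap.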
Those questions are not anti-AI. They are what separates a pilot from a platform.
The Windows Angle Is Not Windows, It Is the Managed Endpoint
At first glance, Copilot Cowork looks like a Microsoft 365 story rather than a Windows story. It runs in the cloud, appears in Copilot experiences, and now reaches iOS and Android. But for WindowsForum’s audience, the underlying issue is deeply familiar: the endpoint is where user intent, identity, policy, and work collide.

The PC will remain where much knowledge work is created, reviewed, and finished. The phone will increasingly be where agent work is supervised. The browser and desktop app will be where complex tasks are shaped. The administrative plane will be where IT decides which identities, devices, data classes, plugins, and skills are allowed to interact.
Windows itself may not be the headline, but Microsoft’s traditional model of managed computing is all over this launch. Cowork assumes a world in which organizations can define access, enforce policy, and trust Microsoft’s cloud to mediate action across applications. That is the enterprise bargain Microsoft has been selling for decades, now applied to AI labor.
The more agents can do, the more endpoint posture matters. A compromised account is bad. A compromised account with a trusted agent that can act across mail, files, calendars, Teams, and third-party systems is worse. The security model must evolve from protecting data access to protecting delegated agency.
That is a subtle but profound shift. The next generation of IT incidents may not begin with a user downloading the wrong attachment. They may begin with a user approving the wrong agent action.
The Cowork Preview Gives IT a Narrow Window to Get Serious
The most concrete lesson from this launch is that Microsoft is no longer treating agents as a distant research concept. Cowork may be in Frontier, but its shape is already recognizable enough for planning.

- Copilot Cowork is now available through the Microsoft 365 Copilot mobile app on iOS and Android for organizations in the Frontier preview program.
- Cowork Skills let users and teams reuse instructions for recurring workflows instead of rebuilding complex prompts every time.
- Plugin support starts moving Cowork beyond Microsoft 365 by connecting it to third-party business systems and specialized data providers.
- Work IQ is the strategic layer that grounds Cowork in organizational context, permissions, files, messages, meetings, and tools.
- Human approval remains central because Microsoft is asking users to supervise consequential actions, not merely consume generated text.
- IT teams should pilot Cowork against narrow, reviewable workflows before allowing agents near sensitive systems of record.
The open question is whether Microsoft can make that interface dependable enough for ordinary business. If it can, Copilot Cowork will look less like a feature inside Microsoft 365 and more like an early draft of the managed AI workforce Microsoft wants every tenant to adopt. If it cannot, Cowork will become another impressive preview that teaches enterprises where the boundary between assistance and autonomy really belongs. Either way, the work of preparing for agentic computing has already started, and the organizations that treat this as an IT governance problem rather than a productivity novelty will have the advantage when the preview label finally comes off.
Source: Thurrott.com Microsoft's Copilot Cowork Agent Launches on Mobile and Adds Plugins Support