CoPhish: OAuth Consent Phishing via Copilot Studio

Microsoft Copilot Studio agents can be weaponized to deliver highly convincing OAuth consent phishing that ends in stolen tokens and persistent account access. Researchers have labelled the technique “CoPhish”: it leverages legitimate Microsoft-hosted agent pages to evade traditional detection and to social-engineer both users and administrators into handing over powerful OAuth tokens.

Background

Copilot Studio is Microsoft’s low-code platform for building customizable AI agents that can be published as demo websites or embedded into customer sites. Its design intentionally exposes a friendly, familiar interface on Microsoft domains (for example, copilotstudio.microsoft.com) and includes built-in authentication hooks and automation “topics” to let agents integrate with downstream services. That legitimate flexibility is what makes the new CoPhish technique so dangerous: attackers create malicious agents or abuse compromised tenants to present a trusted UI and then wire the agent’s workflows to exfiltrate OAuth tokens after a user completes an authorization flow.
The issue is not a classic code vulnerability; it’s a design and governance gap combined with social engineering. Because the agent is hosted on Microsoft infrastructure, links and traffic often appear benign to users and to many security systems — a potent advantage for attackers seeking to trick people into clicking “Login” and consenting to permissions that grant access via the Microsoft Graph API.

What is CoPhish and how it works

The attack surface: Copilot Studio agents and demo websites

Copilot Studio agents can be published to a built-in demo website that uses Microsoft-owned domains. That demo URL is intended for testing and internal review, but attackers can share it widely — in email, Teams, or social channels — where recipients see a familiar Microsoft-branded page and are more likely to trust it. The agent UI often mimics other Copilot services closely enough that users may not notice subtle differences, amplifying the social-engineering effect.

The mechanics: login redirect, consent, exfiltration

  • An attacker creates a Copilot Studio agent (either in their own tenant or in a compromised tenant) and configures its sign-in (Login) topic to redirect the user to an OAuth consent workflow.
  • The agent’s authentication settings can include arbitrary authorization URL templates and permission scopes; attackers use those fields to request Microsoft Graph scopes that provide read/write access to mail, chat, calendar, OneNote, or broader application-level permissions for administrators.
  • After the victim clicks “Login” and consents, the resulting authorization code or token is issued within a legitimate Microsoft flow, but the agent’s automation topic is configured to make an HTTP request that forwards the token to an attacker-controlled endpoint.
  • Because the exfiltration occurs from Microsoft infrastructure (the agent’s workflow runs on Microsoft’s backend), the network activity often originates from Microsoft IP ranges, making it harder to detect via traditional egress or proxy logs.
This chain yields a minimal-interaction exploit model: the victim’s only observable action is clicking an apparently legitimate Login button and approving a consent prompt. From that point the attacker holds tokens that can be used to impersonate the user or persist inside the tenant until the tokens are revoked.
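To make that consent step concrete, the following minimal sketch shows the general shape of an OAuth 2.0 authorization URL that a backdoored Login topic could point victims at. Every identifier here (the client_id, the Bot Framework redirect URI commonly used by agent sign-in flows, and the scope list) is an illustrative assumption, not a value taken from the research:
```python
# Illustrative only: the shape of the authorization URL an attacker-controlled
# agent's Login topic could present. All identifiers are placeholders.
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # attacker app registration (placeholder)
    "response_type": "code",
    # Bot Framework's standard OAuth validation endpoint (assumption for agent flows):
    "redirect_uri": "https://token.botframework.com/.auth/web/redirect",
    "scope": "Mail.ReadWrite Chat.ReadWrite Calendars.ReadWrite Notes.ReadWrite offline_access",
    "prompt": "consent",  # forces the permission list to be shown to the victim
}

print(f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}")
```
Note that nothing in this URL looks anomalous to the victim: the host is Microsoft’s own identity endpoint, which is exactly why the consent screen itself is the last line of defense.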

Why administrators are particularly at risk

Datadog’s analysis and follow-up coverage highlight two scenarios where CoPhish is especially effective:
  • Regular users in default-configured tenants can still consent to a set of Microsoft Graph permissions that Microsoft considers acceptable under certain default policies (for example, Mail.ReadWrite, Chat.ReadWrite, Calendars.ReadWrite, Notes.ReadWrite), so attackers can harvest tokens that matter even without admin privileges.
  • Privileged roles such as Application Administrator or Cloud Application Administrator can consent to any application permissions and effectively grant tenant-wide privileges to malicious apps. If an admin is phished via an agent and consents to broad scopes, the attacker can escalate or execute wide-reaching actions.
Because administrative consent bypasses the user-consent guardrails, compromising even a single admin via CoPhish can have catastrophic consequences for an enterprise environment.

The policy context — Microsoft’s shifting consent defaults

Microsoft has iteratively tightened Entra ID’s default application consent policies precisely to reduce consent-based attacks, but gaps remain and the changes are still rolling out across tenants. In mid-2025 Microsoft began replacing older, permissive default consent configurations with more restrictive Microsoft-managed policies (message center item MC1097272), restricting user-level consent for high-impact file and site permissions and encouraging admin-consent workflows for risky scopes. Datadog noted both the July 2025 change and a further planned update in late October 2025 that would narrow the default set of allowed user-consent permissions even more.
Those changes reduce exposure for everyday users but do not fully protect tenants against CoPhish-style attacks because:
  • Administrators retain the ability to consent to broad permissions on behalf of the tenant.
  • Tenants that left default settings unmodified or organizations that rely on user application registration still present opportunities for attackers to register internal apps and lure internal users to consent.
  • The rollout of Microsoft-managed policy changes is gradual and may not reach every tenant or owner-configured environment at the same time.
For defenders, the policy evolution buys time and reduces the attack surface for unsophisticated campaigns — but it is not a silver bullet against social-engineered consent phishing wrapped in trusted UIs.

Why CoPhish bypasses many traditional defenses

  • Trusted domain advantage: Links come from copilotstudio.microsoft.com (a Microsoft domain), so URL-based reputation checks, domain allowlists, and superficial user heuristics are less effective. Security stacks that block external or unknown domains may not catch these pages because the hosting domain is legitimate.
  • OAuth-based access bypasses MFA: Consent phishing obtains tokens through user-approved authorization flows rather than credential theft. Because the token is granted by the identity provider after a consent decision, attackers can often bypass second factors and session-hardening controls that only trigger during interactive logins. This is a documented risk in consent-phishing and illicit consent grant campaigns.
  • Stealthy exfiltration via platform backends: When an agent’s workflow sends the token from Microsoft’s servers, the outbound request originates from Microsoft address space — filtering or monitoring egress traffic for suspicious destinations may miss it, and endpoint logs on the victim device won’t show the token exfiltration.
  • Low-code automation abuse: Copilot Studio topics are intentionally powerful to make agent integrations easy; those same automation primitives can be repurposed to perform token forwarding and other stealthy actions once a user grants consent.
These characteristics make CoPhish a powerful blend of platform abuse and consent phishing that requires defenders to treat AI agent surfaces as first-class attack vectors.

What Microsoft has said and planned mitigations

Microsoft acknowledged the research and told reporters it investigated the technique and planned product updates and policy hardening to reduce misuse of Copilot Studio for consent phishing. Public statements emphasize that the technique depends heavily on social engineering and that Microsoft is evaluating additional safeguards in governance and consent experiences. At the same time, Microsoft documentation for Copilot Studio explicitly labels the demo website as a testing tool — not intended for production or customer-facing scenarios — indicating that the platform’s sharing affordances were never meant to be broadly publicized.
Datadog and other researchers recommended actionable mitigations administrators should apply immediately while platform-level fixes roll out:
  • Disable or restrict user application creation and registration where possible.
  • Enforce least privilege for administrator roles and reduce the number of users who can grant admin consent.
  • Configure and require an admin consent workflow, so high-risk app consent requests require an explicit review/approval step.
  • Monitor Entra ID and Microsoft 365 audit logs for “Consent to application” events and for Copilot Studio-specific operations such as agent creation and topic updates.
  • Treat Copilot Studio demo URLs as untrusted when received in email or chats and verify developer/owner identity before interacting.
Microsoft’s forthcoming product changes and the tightened default consent policy reduce the attack window for many tenants, but defenders must apply operational controls to close the remaining gaps.

Practical guidance for security teams (detection and hardening playbook)

Quick triage checklist (immediate steps)

  • Revoke suspicious refresh tokens and session tokens when you detect unusual consents or rapid application registrations (a revocation sketch follows this checklist).
  • Restrict the set of users who can register applications (set “Users can register applications” to No unless your business requires it).
  • Audit recent Entra ID “Consent to application” events and look for unusual application names, creation IPs, or repeated consent grants coming from Microsoft services.
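As a concrete starting point for the first item above, here is a minimal sketch of a session-revocation call against Microsoft Graph. It assumes an already-acquired admin token with an appropriate permission (for example User.RevokeSessions.All); the token and user principal name are placeholders:
```python
# Triage sketch: revoke refresh tokens and sessions for a suspected-compromised
# user via Microsoft Graph. ADMIN_TOKEN and USER are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ADMIN_TOKEN = "<admin-access-token>"    # placeholder
USER = "victim@contoso.example"         # placeholder UPN

resp = requests.post(
    f"{GRAPH}/users/{USER}/revokeSignInSessions",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
# All refresh tokens and session cookies issued before now are invalidated;
# existing access tokens remain valid until they expire, so act quickly.
print("Revoked, HTTP", resp.status_code)
```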

Detection rules to add now

  • Monitor audit logs for Copilot Studio agent creation (“BotCreate” workload events) and for updates to system topics (BotComponentUpdate) that mention sign-in topics or authentication settings.
  • Alert on new OAuth app registrations that request unexpectedly broad Microsoft Graph scopes or that include redirect URIs to external/unexpected hosts.
  • Create SIEM correlation rules that flag “consent to application” events followed by anomalous mailbox access or service principal activity originating from non-standard locations. Datadog published example detections that can be adapted to other SIEMs; a minimal audit-log query sketch follows this list.
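A hedged sketch of pulling the underlying consent events from the Entra ID audit log via Microsoft Graph; the endpoint and filter are standard Graph v1.0, the token is a placeholder and requires AuditLog.Read.All:
```python
# Sketch: list recent "Consent to application" events from the Entra ID audit log.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<audit-reader-access-token>"  # placeholder

params = {"$filter": "activityDisplayName eq 'Consent to application'", "$top": "50"}
events = requests.get(
    f"{GRAPH}/auditLogs/directoryAudits",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params=params,
    timeout=30,
).json().get("value", [])

for e in events:
    # Surface who consented and to which app; flag apps your org has never seen.
    who = e.get("initiatedBy", {}).get("user", {}).get("userPrincipalName")
    targets = [t.get("displayName") for t in e.get("targetResources", [])]
    print(e["activityDateTime"], who, targets)
```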

Policy and configuration hardening

  • Enforce a strict application consent policy for users: limit the scopes that non-admin users can consent to and require admin approval for any high-impact permissions.
  • Use conditional access policies to protect high-privilege roles, require privileged access workstations (PAWs) for consent decisions by admins, and apply step-up authentication on risky workflow approvals.
  • Implement an admin consent workflow so that users can request approval rather than freely consenting to new apps. This process reduces click-through risk in social-engineering scenarios; a sketch for verifying the workflow is enabled follows.
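Because a disabled workflow quietly removes users’ request path, it is worth checking programmatically. A minimal sketch, assuming a token with Policy.Read.All; the endpoint is the standard Graph v1.0 adminConsentRequestPolicy resource:
```python
# Sketch: verify the admin consent workflow is actually enabled.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<policy-reader-access-token>"  # placeholder

policy = requests.get(
    f"{GRAPH}/policies/adminConsentRequestPolicy",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
).json()

if not policy.get("isEnabled"):
    print("Admin consent workflow is OFF: users who cannot consent get no "
          "request path, and admins may be tempted to consent ad hoc.")
else:
    print("Enabled; reviewers:", policy.get("reviewers"))
```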

User controls and education

  • Treat any Copilot Studio demo or “agent” link that arrives unsolicited as potentially malicious. Train users to verify the developer or the originating tenant and to report suspect consent prompts to the security team rather than clicking Accept.
  • Include OAuth consent examples in phishing simulations so people recognize permission lists and scope names. Teach users that clicking “Accept” grants programmatic privileges that are not the same as signing into a known service.

Incident response for a suspected CoPhish compromise

If you suspect tokens were stolen via a CoPhish vector, follow these prioritized steps:
  • Identify the impacted accounts and list all recent “Consent to application” events and app registrations tied to the timeframe.
  • Revoke refresh tokens and active sessions for affected users; this forces token re-issuance and can cut off attacker access. Note: token revocation may not end all persistence if the attacker also obtained application credentials or service principals — investigate thoroughly.
  • Disable or delete the malicious application/service principal from the tenant (see the removal sketch after this list) and ensure any owner accounts are investigated for compromise.
  • Audit mailbox and OneDrive access to determine data exfiltration scope; preserve logs for forensic analysis.
  • Rotate credentials for any service principals or automation accounts modified or created by the attacker.
  • After containment, implement consent policy hardening and additional monitoring to prevent repeat exploitation.
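For the application-removal step, here is a minimal containment sketch against Microsoft Graph; the service principal ID and token are placeholders, and the calls assume Application.ReadWrite.All plus DelegatedPermissionGrant.ReadWrite.All:
```python
# Containment sketch: strip a suspicious app's delegated consent grants,
# then delete its service principal so it can no longer be used in-tenant.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<admin-access-token>"            # placeholder
SP_ID = "<service-principal-object-id>"   # placeholder

headers = {"Authorization": f"Bearer {TOKEN}"}

# 1. Enumerate and delete the OAuth2 permission grants tied to the app.
grants = requests.get(
    f"{GRAPH}/servicePrincipals/{SP_ID}/oauth2PermissionGrants",
    headers=headers, timeout=30,
).json().get("value", [])
for g in grants:
    requests.delete(f"{GRAPH}/oauth2PermissionGrants/{g['id']}",
                    headers=headers, timeout=30).raise_for_status()

# 2. Delete the service principal itself.
requests.delete(f"{GRAPH}/servicePrincipals/{SP_ID}",
                headers=headers, timeout=30).raise_for_status()
```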
These steps combine Microsoft-recommended actions and pragmatic containment playbooks championed by incident-response teams confronting consent-phishing incidents.

Strategic implications: AI platforms as new attack surfaces

CoPhish is a case study in how AI tooling and low-code automation broaden the attack surface. The key lessons for risk managers and CISOs are:
  • Platform trust is fragile: security assumptions that a vendor-controlled domain equals safety are no longer sufficient. Attackers will exploit legitimate hosting and branded domains to social-engineer victims.
  • Low-code and automation primitives require governance: organizations must treat agent creation, topic editing, and demo publishing as administrative activities that require approval, monitoring, and separation of duties.
  • Identity-first defenses must be layered: application consent policies, admin consent workflows, conditional access, and privileged access management should work together; no single control will fully stop consent-phishing attacks.
  • Vendor collaboration is necessary: platform-level mitigations (for example, limiting redirect templates in publicly hosted demo pages, adding consent-UI hardening, or blocking outbound webhooks from demo environments) complement tenant-side controls and must be pursued in partnership with vendors. Datadog and other researchers pushed Microsoft to harden governance; vendor fixes reduce risk, but operational controls remain essential.
AI agents will continue to accelerate productivity, but their convenience must be balanced with stricter governance and identity-aware protections.

Recommended prioritized roadmap for defenders

Immediate (0–7 days):
  • Disable user app registrations tenant-wide unless necessary.
  • Turn on enhanced logging for Entra ID and Microsoft 365 audit events.
  • Block or restrict demo-site distribution channels for Copilot Studio agents via policy and internal communication.
Short term (1–4 weeks):
  • Implement an admin consent workflow and reduce the number of users who can grant admin consent.
  • Add SIEM detections for Copilot Studio agent creation and sign-in topic edits.
  • Run targeted phishing simulations focused on OAuth consent education.
Medium term (1–3 months):
  • Enforce conditional access for admin roles, deploy PAWs for sensitive tasks, and require step-up MFA for consent approvals.
  • Review and remediate over-privileged service principals and apps; adopt least privilege for app registrations.
  • Work with Microsoft support or account teams to understand tenant rollout schedules and to request vendor-side mitigations where applicable.
Long term (3–12 months):
  • Build governance around low-code and AI tooling: approval gates, developer identity verification, and marketplace vetting for agents.
  • Integrate OAuth consent monitoring into regular security posture reviews and tabletop exercises.
  • Consider third-party Entra ID posture tools and risk-scoring systems to provide continuous assessment of app consent exposure.

Risks and open questions

  • Detection workarounds: defenders must assume attackers will iterate. If Microsoft mitigates demo-site abuse, adversaries may switch to embedding agents in compromised customer pages or cloning UI patterns elsewhere.
  • Persisting tokens and refresh lifetimes: the lifecycle of refresh tokens, and how long they provide undetected access, remains a key variable for impact assessments; tenants should assume refresh tokens can persist and respond by revoking and rotating them.
  • Incomplete rollouts: not every tenant has the same default policies or administration discipline; uneven adoption of Microsoft’s consent defaults means some organizations will remain exposed longer than others.
  • Human factors: because the attack relies on consent clicks, user education and interface design changes (e.g., clearer consent UIs, friction on high-risk consents) will be necessary complements to technical controls.
These risks emphasize the mixed nature of the threat: partly technical, partly organizational, and largely social.

Conclusion

CoPhish is an urgent reminder that as AI tooling and low-code automation become embedded in enterprise workflows, attackers will treat those very conveniences as new vectors for classic identity attacks. The blend of trusted hosting, configurable authentication flows, and automation topics that can forward tokens creates a high-payoff phishing scenario that can bypass multi-factor protections and live on until tokens are revoked.
The immediate path forward is clear: organizations must tighten Entra ID consent governance, restrict who can register and consent to applications, instrument Copilot Studio agent lifecycle events with monitoring, and educate users to treat unsolicited Microsoft agent links as untrusted. Vendor hardening from Microsoft will reduce the attack surface, but durable protection requires tenants to adopt identity-first controls, stronger admin workflows, and continuous detection tuned for consent-based abuse.
Security teams that treat AI-driven agent platforms as first-class assets — with policies, monitoring, and a clear incident playbook — will be best positioned to blunt CoPhish-style campaigns and the next generation of consent-phishing attacks.

Source: WebProNews New CoPhish Attack Hijacks Microsoft Copilot to Steal OAuth Tokens
 
A newly documented phishing technique named CoPhish weaponizes Microsoft’s Copilot Studio to harvest OAuth tokens — a shift in attacker tactics that transforms trusted, Microsoft-hosted agent demo pages into convincing consent lures capable of silently exfiltrating bearer tokens and enabling account takeover or broad Microsoft Graph access.

Background and overview

Microsoft Copilot Studio is a low-code platform that lets organizations and individuals build, publish, and share customizable AI agents (often called “agents” or “bots”) hosted on Microsoft domains such as copilotstudio.microsoft.com. The platform exposes friendly, first‑party looking demo pages and built‑in authentication hooks so agents can integrate with downstream services — features that attackers now exploit as trusted distribution and automation channels.
Researchers at Datadog Security Labs disclosed the CoPhish proof‑of‑concept and technical write‑up, showing how an attacker-controlled agent can present a legitimate-looking sign‑in experience, redirect the user to an OAuth consent flow, capture the resulting authorization artifacts, and use Copilot Studio’s automation topics to forward access tokens to attacker infrastructure — often from Microsoft infrastructure itself, which complicates detection.
This development sits at the intersection of two long-standing attack classes: OAuth consent phishing (MITRE technique T1528) and adversary‑in‑the‑middle/device‑code social engineering; CoPhish amplifies their effectiveness by leaning on trusted hosting and low‑code automation inside the Copilot environment. Independent outlets and security blogs quickly corroborated Datadog’s findings and reported that Microsoft has acknowledged the issue and plans product updates.

Why this matters: OAuth tokens are keys, not passwords

OAuth access tokens are bearer tokens that allow callers to act on behalf of the user when calling Microsoft Graph and other APIs. A stolen token effectively functions as a key — it can give attackers programmatic access to email, calendar, files, chat, and other resources without exposing or resetting a user’s password. The CoPhish flow demonstrated tokens with scopes such as Mail.ReadWrite, Mail.Send, Chat.ReadWrite and Notes.ReadWrite.
Key risk amplifiers:
  • The lure is hosted on a Microsoft domain (copilotstudio.microsoft.com), creating visual trust that significantly reduces users’ suspicion.
  • Copilot Studio agents can run automation topics server‑side, allowing tokens to be forwarded from Microsoft infrastructure rather than the victim’s browser, obscuring exfiltration from endpoint network logs.
  • Privileged roles in Entra ID (for example, Application Administrator or Cloud Application Administrator) can consent to powerful permissions on behalf of a tenant — if such a user is tricked, the attacker can request far broader scopes and application permissions.
Because tokens can be long‑lived and programmatically used, an attacker who obtains them can escalate, persist, and pivot — sending phishing from the victim’s mailbox, searching Graph for high value artifacts, or provisioning service principals and application permissions depending on the scopes granted.
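The point is easy to demonstrate: whoever holds the token can simply present it to Graph. A minimal sketch follows (the token value is a placeholder; which calls succeed depends entirely on the scopes the victim consented to):
```python
# Why a stolen token is a key: any holder can call Graph with it, with no
# password prompt and no MFA challenge. STOLEN_TOKEN is a placeholder.
import requests

STOLEN_TOKEN = "<bearer-access-token>"  # placeholder

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages?$top=5",
    headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
    timeout=30,
)
for msg in resp.json().get("value", []):
    # Succeeds if the token carries a mail-reading scope such as Mail.ReadWrite.
    print(msg.get("subject"))
```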

Mechanics of CoPhish: a step-by-step breakdown

  • Attacker provisions or compromises a Copilot Studio agent, either in their own tenant (trial or licensed) or in a tenant they’ve already gained access to.
  • The agent’s UI is published to the built‑in demo website (copilotstudio.microsoft.com/…), producing a Microsoft‑hosted URL the attacker can distribute via email, Teams or social channels.
  • The agent’s Login (sign‑in) topic — a configurable automation that normally helps integrate services — is backdoored to redirect to an OAuth authorization URL for an attacker‑controlled app and to perform an HTTP request that forwards the captured User.AccessToken to an external collector immediately after consent. That HTTP request runs from Microsoft’s agent runtime, so outbound traffic appears to originate from Microsoft IP ranges.
  • The victim clicks “Login”, completes the OAuth consent, and receives the typical Copilot sign‑in user interaction (including the token.botframework.com validation step). Unbeknownst to the victim, the agent automation forwards the token to the attacker.
  • The attacker now possesses a bearer token with the Graph scopes the user consented to; they use it to act via Graph APIs, send malicious mail, exfiltrate data, or further the compromise.
This chain is particularly insidious because the only user action required is clicking a Microsoft‑hosted “Login” button — the trust anchor shifts from “recognize a fake domain” to “assume Microsoft is legitimate.”
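On the responder side, a captured or suspect access token can be triaged quickly because Entra ID access tokens are JWTs: the middle segment is base64url-encoded JSON whose scp claim (or roles, for app-only tokens) lists exactly what the token can do. A minimal, signature-unverified decoding sketch:
```python
# Triage sketch: decode a (suspected stolen) Graph access token's payload to
# assess its blast radius. No signature validation is done here.
import base64
import json

def token_scopes(jwt: str) -> list[str]:
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    # Delegated tokens expose scopes in "scp" (space-separated);
    # app-only tokens carry a "roles" list instead.
    return claims.get("scp", "").split() or claims.get("roles", [])

# Example: token_scopes("<captured-token>") -> ['Mail.ReadWrite', 'Chat.ReadWrite']
```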

What Datadog and independent reporting found

Datadog’s write‑up includes a reproducible proof‑of‑concept that demonstrates token capture via a backdoored Login topic and exfiltration to Burp Collaborator. Their analysis details both the automation configuration and the reply/redirect URL patterns that enable token capture via the Bot Framework validation steps used by Copilot.
Independent security outlets — from TechRadar to specialist blogs — replicated the high‑level narrative and quoted Microsoft’s statement that the vendor is investigating and planning product updates to harden governance and consent experiences. These outlets also underscore the real operational risks if administrators or users in default‑configured tenants are tricked.
One critical caveat both Datadog and follow‑up reporting emphasize: while the PoC proves feasibility, public telemetry demonstrating mass, in‑the‑wild compromise tied specifically to Copilot Studio abuse has not been published; claims of widespread tenant compromise should be viewed as plausible but unquantified until telemetry is released.

The policy context: what Microsoft changed (and what remains)

Microsoft has progressively tightened Entra ID application consent policy defaults to reduce user‑consent risk. In 2020 Microsoft first restricted user consent for unverified external applications; in mid‑2025 the company rolled out a Microsoft‑managed default (referred to in internal notes as microsoft-user-default-recommended, announced via message center MC1097272) that narrowed the set of Graph scopes ordinary users may consent to by default. Datadog’s analysis documents the July 2025 update and a further Microsoft announcement to narrow defaults in late October 2025.
Under the July 2025 defaults, the set of scopes that non‑admin members could self‑consent to was restricted, but still allowed certain mailbox, chat, calendar, and OneNote scopes (for example Mail.ReadWrite, Chat.ReadWrite, Calendars.ReadWrite, Notes.ReadWrite). A later Microsoft change scheduled for late October 2025 was designed to further limit the allowed scopes — reportedly leaving only OneNote access (Notes.ReadWrite) in the default user‑consent set — but administrative consent and privileged roles remained outside the scope of those default limitations.
Practical implications:
  • The Microsoft‑managed default reduces exposure for the average tenant user, but it does not eliminate consent‑based attacks — particularly when privileged roles or tenant settings permit broader consent.
  • Policy rollouts are gradual; tenants may not receive or adopt new defaults simultaneously. Admins must not rely solely on vendor defaults; tenant‑level governance remains essential (a policy-check sketch follows this list).
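One way to confirm which default actually applies to your tenant is to read the authorization policy over Graph and inspect the permission-grant policy assigned to the default user role. A sketch, assuming a token with Policy.Read.All:
```python
# Sketch: check which permission-grant policy governs default user consent.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<policy-reader-access-token>"  # placeholder

policy = requests.get(
    f"{GRAPH}/policies/authorizationPolicy",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
).json()

assigned = policy["defaultUserRolePermissions"].get("permissionGrantPoliciesAssigned", [])
print(assigned)
# e.g. ['ManagePermissionGrantsForSelf.microsoft-user-default-recommended']
# An empty list means users cannot consent at all; a '...-legacy' value means
# the old, permissive default is still in effect.
```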

Prior vulnerabilities and CVE context

CoPhish sits on a broader backdrop of Copilot Studio security issues. Notably, a critical Cross‑Site Scripting vulnerability (CVE‑2024‑49038) affecting Copilot Studio was publicly documented in November 2024 and remediated by Microsoft; it highlighted origin validation and manifest hygiene shortcomings that were addressed as part of product hardening. Public vulnerability databases and Microsoft Security Response Center advisories document the CVE and subsequent mitigations.
That history matters because it shows two things: cloud‑hosted, low‑code platforms are attractive targets, and even fixed vulnerabilities can leave residual governance and UX risks that social engineering exploits later. The existence of CVE‑2024‑49038 reinforces why organizations need telemetry and policy controls layered above vendor protections.

Detection challenges: why CoPhish is stealthy

  • Server‑side exfiltration: Copilot agent automations can POST tokens directly from Microsoft infrastructure, so network egress observed from the victim’s device may look normal and not show connections to attacker hosts. This substantially degrades egress‑based detection.
  • Legitimate UX: The consent flows and token exchanges use Microsoft endpoints (token.botframework.com, sts.windows.net, etc.), which look legitimate in logs and on the UI — defeating naive heuristics that flag unknown domains or TLS anomalies.
  • Privileged consent: If an admin grants consent, the attacker can gain application permissions that enable tenant‑wide actions, and those elevated consents are part of normal administrative workflows, making detection harder without identity‑centric correlation.
The most reliable detection telemetry is identity and application consent logs in Entra ID (audit logs, service principal creation, consent events) and Copilot administrative logs showing agent creations, topic edits, and demo URL generation. Correlating these two data planes is essential.
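The sketch below illustrates that correlation in its simplest form: flag any app-consent event that lands shortly after a Copilot Studio agent creation or topic edit. The event shapes and field names are assumptions to be mapped onto your SIEM’s actual schema:
```python
# Correlation sketch: pair Copilot Studio agent events with consent events
# that occur soon afterwards. Field names ("time") are assumed placeholders.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=2)

def correlate(copilot_events: list[dict], consent_events: list[dict]) -> list[tuple]:
    """copilot_events: e.g. BotCreate / BotComponentUpdate audit records.
    consent_events: Entra ID 'Consent to application' records.
    Both are assumed to carry an ISO-8601 'time' field."""
    hits = []
    for c in consent_events:
        t_consent = datetime.fromisoformat(c["time"])
        for a in copilot_events:
            t_agent = datetime.fromisoformat(a["time"])
            # Flag consents that follow agent activity within the window.
            if timedelta(0) <= t_consent - t_agent <= WINDOW:
                hits.append((a, c))
    return hits
```
In production this join would run continuously in the SIEM rather than in batch, but the logic is the same: the two data planes only become meaningful once stitched together.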

Recommended mitigations: immediate, short‑term, and long‑term

Security teams should treat CoPhish as a high‑priority identity/consent threat and adopt layered countermeasures across governance, authentication, telemetry, and user hardening.
Immediate (hours — 48 hours)
  • Restrict who can consent to applications: limit Cloud Application Administrator / Application Administrator roles to a small, vetted set and require out‑of‑band approvals for any new consents.
  • Apply Microsoft‑managed consent defaults or a stricter custom policy that blocks user consent for high‑risk Graph scopes (Mail.ReadWrite, Files.Read.All, Sites.Read.All). Validate that the tenant received the July 2025/October 2025 policy changes.
  • Disable user app creation / restrict who can register apps in the tenant if your organization does not need it (a sketch follows this list).
  • Enforce phishing‑resistant MFA (FIDO2 or platform passkeys) for all privileged roles; this materially reduces AiTM/device‑code social engineering success.
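For the app-creation restriction, the tenant-wide switch can be flipped over Graph. A minimal sketch, assuming a token with Policy.ReadWrite.Authorization (the token is a placeholder):
```python
# Sketch: turn off "users can register applications" tenant-wide.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<policy-admin-access-token>"  # placeholder

resp = requests.patch(
    f"{GRAPH}/policies/authorizationPolicy",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    json={"defaultUserRolePermissions": {"allowedToCreateApps": False}},
    timeout=30,
)
resp.raise_for_status()  # 204 No Content on success
```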
Short term (days — 2 weeks)
  • Audit recently consented apps and service principals; revoke suspicious consents and re‑evaluate scopes.
  • Add SIEM detections to correlate Entra ID consent events, new service principal creations, and Copilot agent creation/modification events. Alert on post‑consent Graph calls originating soon after agent/demo URL activity.
  • Block or monitor Copilot Studio demo URLs in high‑risk workflows; treat public copilotstudio.microsoft.com links shared in emails to admins as suspicious until validated.
Long term (weeks — months)
  • Implement a formal OAuth/consent governance program with approvals, least‑privilege scoping, periodic audits, and justification requirements for any app requesting sensitive Graph permissions.
  • Integrate Copilot agent governance into IAM and DLP: ensure Copilot connectors, agent permissions, and topics are visible to Purview/DLP policies and subject to approval.
  • Conduct tabletop exercises and red‑team simulations that include consent‑phishing scenarios to validate detection and response playbooks.
Incident response playbook (concise)
  • Revoke tokens and refresh tokens for affected accounts (revokeSignInSessions) and force re‑authentication.
  • Remove the offending application’s consent and delete suspicious service principals. Rotate secrets as required.
  • Hunt Graph activity for actions taken under the compromised identity (sent mail, calendar invites, file accesses) and contain lateral spread.

Practical, prioritized checklist for Windows admins and security teams

  • Immediately identify who can grant app consent in your tenant and reduce that group to the least possible size.
  • Confirm your tenant has the latest Microsoft‑managed consent policies applied; review message center items and apply stricter custom policies if warranted.
  • Enforce phishing‑resistant MFA for all admins and any role that can grant consent.
  • Add detection rules: alert on (a) new Copilot agent demo URLs; (b) new service principal creation and consent events; (c) Graph activity from recently consented apps. (A URL-matching sketch for (a) follows this list.)
  • Educate privileged users: require out‑of‑band verification for any unusual consent requests, and treat Microsoft‑hosted demo URLs as suspicious in high‑value workflows until validated.
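For item (a), a deliberately coarse URL matcher is enough to start triaging mail and chat content; the exact demo-URL path structure is an assumption, so match the host broadly and tune against your own samples:
```python
# Sketch: coarse detector for Copilot Studio demo links in message text.
import re

DEMO_LINK = re.compile(r"https://copilotstudio\.microsoft\.com/\S+", re.IGNORECASE)

def find_demo_links(text: str) -> list[str]:
    """Return every copilotstudio.microsoft.com URL found in the text."""
    return DEMO_LINK.findall(text)

# find_demo_links("Try our new HR bot: https://copilotstudio.microsoft.com/environments/...")
```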

Critical analysis: strengths of CoPhish, vendor responsibility, and residual risks

Strengths of the attacker model
  • Low cost, high yield: building or compromising a Copilot agent, registering an app, and crafting a convincing consent flow are inexpensive relative to the impact. Datadog’s PoC demonstrates feasibility without zero‑day exploits.
  • Trusted hosting amplifies social engineering: a copilotstudio.microsoft.com URL and a familiar Copilot UI change the calculus for victims and make traditional domain‑and‑TLS‑based heuristics ineffective.
Vendor responsibility and actions
  • Microsoft has acknowledged the issue and is iterating product and policy updates (including the July 2025 consent defaults and further changes slated for late October 2025) to reduce the attack surface. Such vendor‑level controls are powerful because they can be applied at scale across tenants.
  • However, platform hardening alone cannot substitute for tenant governance: privileged roles retain consent power by design, and some tenant‑specific policies or operational needs may keep user consent enabled in practice.
Residual risks and open questions
  • Privileged account risk remains the top residual: if an administrator is successfully phished, the attacker can request far broader scopes than a typical user, enabling tenant‑wide compromise.
  • Telemetry gaps: public reporting to date shows PoCs and vendor fixes, but quantifying how many tenants were exploited in the wild is difficult; early reporting has not published large‑scale confirmed exploit telemetry attributed solely to Copilot Studio agents. Treat claims of mass compromise as plausible but unverified until Microsoft or third parties publish clear telemetry.

What enterprises and defenders should do now (executive summary)

  • Treat Copilot Studio and other low‑code agent platforms as part of the identity attack surface and incorporate them into IAM governance, DLP, and incident playbooks.
  • Immediately tighten consent and app‑registration controls, minimize admin consent capability, and require phishing‑resistant MFA for privileged roles.
  • Improve telemetry and correlation between Copilot agent events and Entra ID consent logs; prioritize SIEM and SOAR playbooks that can rapidly revoke and remediate suspected token theft.

Final assessment and cautionary note

CoPhish is a pragmatic, socially engineered evolution of OAuth consent phishing that leverages legitimate hosting, low‑code automation, and identity workflows to make token theft both stealthier and more convincing. Datadog’s research and multiple independent reports collectively verify the technical feasibility and operational risk; Microsoft has acknowledged and started to address the root problems through platform and policy changes.
However, vendor fixes and managed defaults are not a panacea. The enduring attack surface remains administrative consent, tenant configuration drift, and human trust. Organizations must adopt an identity‑first defense posture: minimize consent‑granting privileges, enforce phishing‑resistant authentication for high‑risk roles, correlate identity and Copilot telemetry, and rehearse rapid token‑revocation and containment procedures. These operational controls — combined with continued product hardening from vendors — represent the most realistic, durable defense against CoPhish and similar consent‑phishing threats.
Any claim that CoPhish has already led to broad, confirmed production compromises should be treated with caution until authoritative telemetry is published; the technique is demonstrably practical, but the scale of real‑world exploitation remains unquantified in public reporting.

CoPhish is a timely reminder that as enterprise IT adopts agentic AI and low‑code platforms, the trust surface expands — and identity governance must keep pace. The most dangerous compromises are those that look and feel legitimate; stopping them requires policy, telemetry, and hardened human workflows working together.

Source: RS Web Solutions Fresh CoPhish Attack Targets Copilot Studio Stealing OAuth Tokens