ConsentFix: OAuth Consent Phishing Targeting Azure CLI and Microsoft Graph

Security researchers have discovered a sophisticated new phishing variant — dubbed ConsentFix — that weaponizes trusted Microsoft OAuth flows and the Azure Command-Line Interface (Azure CLI) to take over Microsoft accounts without passwords, without directly bypassing multi-factor authentication (MFA), and often without creating obvious audit noise.

Background

ConsentFix is the latest evolution in a family of attacks that abuse OAuth consent mechanics and trusted first‑party applications to obtain authorization codes, and the bearer access tokens they can be exchanged for, which grant programmatic access to Microsoft Graph and other tenant resources. Researchers at Push Security described ConsentFix as a browser‑native ClickFix variant that convinces a target to paste a localhost OAuth redirect URL containing an authorization code back into a malicious page — thereby completing an OAuth flow that grants access to an attacker-controlled client running under the identity of a legitimate, first‑party Microsoft application. Independent coverage and technical writeups make two consistent points:
  • The core social engineering is subtle and minimal — victims rarely type credentials or reveal second factors; they simply follow seemingly benign prompts and copy/paste a URL.
  • The attacker benefits from targeting first‑party Microsoft applications and flows (for example, Azure CLI or other integrated Microsoft apps) that enjoy elevated trust in Entra ID tenants and can sometimes evade tenant-level consent gating.
These characteristics make ConsentFix a high‑leverage tactic: the attacker obtains bearer tokens that can be used immediately to enumerate directory data, read mail, access files, invoke Graph API calls, create service principals, or perform further privilege escalation — all without the traditional signals defenders expect from credential‑theft or brute‑force attacks.

How ConsentFix works — a step‑by‑step breakdown

1. Lure and infection vector

  • Attackers poison search results (search‑engine SEO poisoning) or compromise reputable websites and insert a convincing UX — commonly a fake Cloudflare Turnstile challenge — to collect visitor email addresses and filter for corporate accounts.
  • The site asks for a corporate email and, for selected targets, presents a “Sign in” button that opens a legitimate Microsoft sign‑in flow in a new browser tab. If the user’s browser session already contains an active Microsoft login, the victim can complete the sign‑in with a single click.

2. OAuth redirect to localhost

  • The legitimate Microsoft flow redirects back to a localhost URL (for example, http://127.0.0.1:port/?code=AUTH_CODE&...) containing an authorization code generated for the Azure CLI or another targeted first‑party app. This redirect is a normal part of many OAuth flows for native applications and developer tools.

3. Copy‑paste hook

  • The phishing page instructs the victim to copy the redirect URL from the browser address bar and paste it back into the original page to “complete verification.” That single copy‑paste action hands the attacker the authorization code and completes the OAuth code‑exchange under the attacker’s control, producing access tokens for the attacker to use. The victim’s visible action looks harmless, but it bridges the final link in the OAuth grant flow.
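The mechanics of that final step can be sketched in a few lines. This is not code from the actual campaign — it is a hypothetical illustration that only builds the token-request payload and sends nothing over the network. The URL, port, and code value are invented; the client ID is the widely documented Azure CLI application ID, and a public client like Azure CLI needs no client secret to redeem a code.

```python
# Hypothetical sketch of the attacker-side code exchange. Nothing is sent
# over the network here; the point is that the pasted URL alone is enough to
# build a valid token request for a public client (no client secret needed).
from urllib.parse import parse_qs, urlparse

# Widely documented first-party application ID for Azure CLI.
AZURE_CLI_CLIENT_ID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46"
TOKEN_ENDPOINT = "https://login.microsoftonline.com/organizations/oauth2/v2.0/token"

def build_token_request(pasted_url: str, redirect_uri: str) -> dict:
    """Extract the one-time authorization code from the URL the victim pasted
    and build the form body that would be POSTed to the token endpoint."""
    code = parse_qs(urlparse(pasted_url).query)["code"][0]
    return {
        "grant_type": "authorization_code",
        "client_id": AZURE_CLI_CLIENT_ID,
        "code": code,
        "redirect_uri": redirect_uri,  # must match the URI in the original request
    }

# Invented example of the kind of URL the victim copies from the address bar.
pasted = "http://localhost:8400/?code=FAKE_AUTH_CODE&state=xyz"
payload = build_token_request(pasted, "http://localhost:8400")
```

From the victim's perspective, nothing sensitive was typed; from the attacker's, the pasted string contains everything needed to complete the grant.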

4. Exploiting elevated trust (Azure CLI)

  • Azure CLI is a pre‑trusted, first‑party Microsoft application ID that many tenants treat as a legitimate interactive tool. Because it’s a Microsoft‑managed app, it can request certain permissions and scopes with behavior that differs from arbitrary third‑party apps. Push Security’s analysis and follow‑up reporting highlight that targeting this type of first‑party app reduces the effectiveness of tenant controls that otherwise limit user‑initiated consent.

5. Attacker actions after token acquisition

  • With valid tokens, an attacker can:
      • Enumerate users and groups (directory enumeration)
      • Read or exfiltrate emails and files
      • Create or modify service principals, app registrations, or role assignments (depending on scopes)
      • Persist and pivot from cloud resources to downstream systems
  • Detection is complicated because many actions are performed through normal Graph API calls and may appear as legitimate application‑originated traffic.

Why Azure CLI and first‑party apps are attractive targets

  • Implicit trust: First‑party Microsoft apps (including the Azure CLI application GUID) are built into Entra ID’s ecosystem. Many tenants cannot remove, and often cannot straightforwardly block, these service principals from the portal UI, and they are treated differently from external publisher apps. Microsoft documentation and administrative experience confirm that certain Microsoft‑managed applications are not removable through normal tenant UIs. This operational reality raises the bar for tenant‑side mitigation.
  • Powerful scopes: Some flows allow requests for powerful Graph scopes (including legacy/documented and undocumented internal scopes in some cases), which — once consented — give broad access. Attackers who obtain tokens with these scopes can perform high‑impact actions programmatically.
  • Consent governance gaps: Although Microsoft has tightened default user‑consent policies in 2025, tenant settings vary widely; many organizations still permit user consent for certain delegated scopes or leave admin consent options enabled for privileged roles. Attackers target these gaps and high‑value users (admins or developers) to maximize impact.
  • Low endpoint footprint: Because the attack happens within the browser and uses legitimate Microsoft back‑end flows, endpoint detection (EDR) and traditional network egress monitoring may not observe token exfiltration or flag it as malicious. This invisibility makes cloud and identity telemetry (Entra/audit/Graph logs) the primary detection surface.

Technical validation and independent corroboration

Key claims in the original disclosure have been independently corroborated across vendor reports and platform documentation:
  • Push Security published the primary research describing the ConsentFix flow and the specific targeting of the Azure CLI flow. Their write‑up details the copy/paste localhost redirect and notes the practical inability of tenant administrators to fully block certain Microsoft first‑party service principals.
  • Mainstream reporting summarized the same attack vector and stressed the minimal user interaction required (copy/paste) and the risk of abusing first‑party trust.
  • Microsoft documentation on user and admin consent, and enterprise applications, documents the distinctions between user‑consent and admin‑consent flows and warns that some Microsoft‑managed service principals may not be removable through standard portal controls — lending technical context to why they are attractive to attackers. This supports Push Security’s observations about operational constraints on tenant administrators.
Where reporting makes definitive operational claims (for example, “Azure CLI cannot be blocked or deleted”), those represent practical, real‑world behaviors observed by researchers and defenders rather than absolute impossibilities. Tenant operators can apply targeted controls (conditional access, app‑consent policies, RBAC restrictions, token lifetime controls), but fully removing Microsoft‑managed service principals via the portal is not always possible. Treat such absolute‑sounding claims as operational warnings and verify tenant‑specific behavior during hardening.

Detection and response: what defenders should do now

ConsentFix is a case study in why identity telemetry and consent governance are now primary defensive controls. Practical, prioritized actions follow.

Immediate (hours to days)

  • Monitor and alert on interactive Azure CLI sign‑ins and unusual CLI activity for users who don’t normally use CLI tooling. Flag interactive device‑code or localhost redirect flows that do not align with normal behavior.
  • Audit recent “Consent to application” events in Entra/Sign‑in logs. Look for new or unexpected enterprise applications or consent grants requesting broad Graph scopes.
  • Revoke suspicious refresh tokens and force a sign‑out for impacted users where consent or token abuse is suspected. Token revocation shortens attacker dwell time.
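The first monitoring step above can be sketched as a simple baseline check. The record fields below are simplified stand-ins, not the actual Entra sign-in log schema, and the helper function is hypothetical; the application ID is the widely documented Azure CLI app ID.

```python
# Hypothetical detection sketch: flag interactive Azure CLI sign-ins from
# users who have no history of using CLI tooling. Field names (user, app_id,
# interactive) are simplified stand-ins for Entra sign-in log columns.
AZURE_CLI_APP_ID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46"

def flag_unusual_cli_signins(signins: list[dict], known_cli_users: set[str]) -> list[dict]:
    """Return interactive Azure CLI sign-ins by users with no CLI baseline."""
    return [
        s for s in signins
        if s["app_id"] == AZURE_CLI_APP_ID
        and s["interactive"]
        and s["user"] not in known_cli_users
    ]

# Invented sample records: a developer with a CLI baseline, and a finance
# executive who has never used Azure CLI before.
signins = [
    {"user": "dev@contoso.com", "app_id": AZURE_CLI_APP_ID, "interactive": True},
    {"user": "cfo@contoso.com", "app_id": AZURE_CLI_APP_ID, "interactive": True},
    {"user": "cfo@contoso.com", "app_id": "some-other-app", "interactive": True},
]
alerts = flag_unusual_cli_signins(signins, known_cli_users={"dev@contoso.com"})
```

In production this logic would live in a SIEM query against real sign-in logs rather than in-process Python, but the baseline-comparison shape is the same.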

Short term (days to weeks)

  • Disable or restrict user application registration and self‑consent where not required by business processes. Require admin approval for high‑impact permissions (admin consent workflow).
  • Reduce token lifetimes and enable refresh token rotation where possible; enforce least privilege on application scopes. Shorter lifetimes limit the window for stolen token reuse.
  • Apply Conditional Access policies that enforce device, location, and risk‑based checks for OAuth token issuance and for admin‑level consent flows. Treat OAuth consent events with the same suspicion as interactive sign‑ins.

Medium term (weeks to months)

  • Harden app consent governance: maintain a whitelist of approved apps, require publisher verification, and conduct regular access reviews for app consent and service principals. Automate alerts for newly registered applications requesting high‑impact scopes.
  • Require Privileged Access Workstations (PAWs) or similar hardened devices for administrators performing consent decisions. Separate duties so a single admin click cannot silently grant tenant‑wide privileges to unvetted apps.

Forensics and hunting tips

  • Correlate Entra/Azure AD audit logs (Consent events, App registrations, Service principal creation), Microsoft 365 audit logs (mailbox access, file downloads), and sign‑in logs to identify suspicious token usage patterns.
  • Look for Graph API spikes (bulk mailbox reads, file downloads) immediately after consent events. Create SIEM rules that pair “consent to application” events with unusual Graph activity.
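That pairing rule amounts to a time-window join, sketched below. The event shapes, timestamps, and thresholds are invented for illustration; a real deployment would express the same logic as a SIEM query over consent-grant and Graph activity logs.

```python
# Hypothetical SIEM-style correlation: pair a "consent to application" event
# with a burst of Graph activity by the same user shortly afterwards. Event
# shapes and the threshold are illustrative, not a real log schema.
from datetime import datetime, timedelta

def correlate(consents: list[dict], graph_events: list[dict],
              window: timedelta = timedelta(minutes=30),
              threshold: int = 50) -> list[dict]:
    """Flag consent events followed by >= threshold Graph calls within window."""
    hits = []
    for c in consents:
        burst = [
            g for g in graph_events
            if g["user"] == c["user"]
            and c["time"] <= g["time"] <= c["time"] + window
        ]
        if len(burst) >= threshold:
            hits.append({"consent": c, "graph_calls": len(burst)})
    return hits

# Invented scenario: a consent grant followed by 120 Graph calls in 2 minutes.
t0 = datetime(2025, 1, 1, 12, 0)
consents = [{"user": "cfo@contoso.com", "time": t0, "app": "unknown-app"}]
graph_events = [{"user": "cfo@contoso.com", "time": t0 + timedelta(seconds=i)}
                for i in range(120)]
suspicious = correlate(consents, graph_events)
```

Tuning the window and threshold to each tenant's normal post-consent activity is what separates a usable rule from alert noise.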

Organizational and UX defenses

Technical controls alone won’t eliminate ConsentFix. Because the attack hinges on social engineering and user interaction, organizations must complement identity hardening with policy and user education:
  • Build user training and phishing simulations that include consent‑phishing scenarios — demonstrate how OAuth consent differs from sign‑in and show red flags (unknown app names, odd redirect URIs, prompts to copy/paste local URLs).
  • Adopt an admin‑driven app catalog and a formal approval workflow for any application that requires delegated or application permissions beyond a tight minimum. Document justifications and re‑review approvals periodically.
  • Treat vendor‑hosted demo or low‑code agent URLs (for example, publicly shared Copilot Studio demo links) as untrusted inbound surfaces until the app is verified and consent is explicitly authorized. The broader “trusted domain” fallacy (assuming a known vendor domain is automatically safe) is a recurring root cause.

Strategic implications — ConsentFix in the larger threat landscape

ConsentFix is not an isolated novelty; it’s a refinement in an ongoing attacker playbook that seeks the highest leverage with the lowest observable noise. Recent history (CoPhish-style abuses of Copilot Studio, token‑based exfiltration proofs‑of‑concept, and prior Entra ID token validation issues) shows attackers gravitating toward:
  • Trusted hosting and vendor domains to beat URL‑based protections.
  • OAuth and consent flows to obtain bearer credentials rather than passwords.
  • Low‑noise API activity and cloud automation to persist and exfiltrate without endpoint artifacts.
The net effect: defenders must shift from a perimeter and credential‑centric posture to identity‑first defenses that treat OAuth grants, tokens, and application consent as critical control points. Zero‑trust principles — assume no implicit trust for any app or flow and continuously verify authorization claims — are now operational necessities.

Strengths and limits of the public reporting

The Push Security disclosure and subsequent coverage are strong on operational detail and detection guidance; they reproduce the minimal human‑interaction mechanics and show why first‑party app trust amplifies risk. The reporting helps defenders translate a confusing new technique into concrete, testable detections and admin actions. Caveats and open questions:
  • Scale and attribution: public reporting confirms active campaigns in the wild, but quantifying tenant impact or the total scale of exploitation is difficult without cross‑tenant telemetry. Treat public claims about global compromise counts conservatively unless tenants’ own telemetry indicates impact.
  • Platform evolution: Microsoft continues to tighten default consent settings and has introduced Secure‑by‑Default managed policies in 2025. These changes reduce attack surface but do not eliminate the threat for tenants with legacy or permissive configurations. Operators must verify their own tenant settings rather than assume vendor defaults have been applied.
  • Absolute statements about “cannot be blocked”: Push Security’s warning that targeted first‑party apps cannot be blocked or deleted via normal tenant UX captures an operational truth for many admins, but defenses (RBAC, Conditional Access, admin consent workflows) still provide meaningful mitigation even if complete app removal is not possible. Validate tenant behavior and document which Microsoft‑managed apps appear in your Enterprise Applications blade and whether they can be disabled programmatically.

A prioritized checklist for Windows and cloud administrators

  • Enforce admin consent for high‑impact scopes and disable user app registration unless required.
  • Monitor and alert on Azure CLI interactive sign‑ins and localhost redirect code flows. Treat unusual CLI activity as high‑priority.
  • Audit “Consent to application” events and revoke suspicious grants; rotate or revoke refresh tokens for affected accounts.
  • Apply Conditional Access to require compliant device posture or PAW for consent decisions by administrators.
  • Shorten token lifetimes, enable refresh token rotation, and reduce scope exposure for delegated permissions.
  • Train users and run consent‑phishing simulations that mirror the copy/paste mechanics used by ConsentFix.

Conclusion

ConsentFix is a clear reminder that identity is now the front line of cloud security. Attackers are shifting from stealing passwords to manipulating the convenience and implicit trust built into modern OAuth and tooling UX. The remedy isn’t a single patch — it’s a change in defensive posture: treat OAuth consent and first‑party app behavior as high‑risk controls, harden tenant consent policies, instrument identity telemetry aggressively, and couple technical controls with realistic user training. Organizations that act now — auditing consent, restricting user‑level app registration, monitoring Azure CLI and Graph activity, and enforcing least privilege for app permissions — will reduce their exposure to ConsentFix and the broader class of token‑theft attacks that follow the same pattern.
Source: eSecurity Planet https://www.esecurityplanet.com/threats/azure-cli-trust-abused-in-consentfix-account-takeovers/
 
