Microsoft’s new Access Review Agent for Entra ID promises to turn one of the most tedious and error-prone identity-governance chores into a guided, AI-assisted workflow inside Microsoft Teams — but the convenience comes with clear prerequisites, operational trade-offs, and governance responsibilities that IT teams must plan for carefully. (petri.com) (learn.microsoft.com)

Background​

Access reviews are a core control in Microsoft Entra ID (formerly Azure AD): scheduled recertifications that ask managers, owners, and designated reviewers to confirm whether users should retain access to groups, applications, and other resources. Historically, these reviews have been time-consuming because reviewers often lack context — they must manually correlate sign-in history, group membership patterns, and HR status to make decisions. The Access Review Agent addresses that gap by surfacing recommendations and natural-language justifications directly in Microsoft Teams, backed by Microsoft Security Copilot. (learn.microsoft.com)
Microsoft has rolled the Access Review Agent out to public preview as an Entra agent that runs on a schedule (or manually) to analyze reviews, produce approve/deny recommendations via a deterministic scoring mechanism, and provide LLM-generated explanation summaries to reviewers, who then complete the process in Teams. The agent’s output is auditable, and admins can view logs and metrics such as total decisions analyzed and Security Compute Units (SCUs) consumed. (learn.microsoft.com)
Context matters: this feature is one of several “agent” experiences that Microsoft is pushing across its security and productivity stack to automate high-volume tasks through Copilot and Copilot agents — a broader trend with long-term implications for identity governance and security operations. Independent coverage of Microsoft’s agent strategy helps frame the Access Review Agent as part of an agentic automation wave rather than an isolated feature. (theverge.com)

How the Access Review Agent works — a practical overview​

The Access Review Agent operates as an Entra agent that:
  • Scans for access reviews flagged for agent processing in your tenant.
  • Gathers contextual signals — such as user sign-in activity, group memberships, and employment status — to score each access decision using a deterministic scoring mechanism (a conceptual sketch of this pattern appears at the end of this overview).
  • Generates an approve/deny recommendation and a short justification summary (LLM-assisted), then saves these recommendations to be surfaced to reviewers in Microsoft Teams.
  • Exposes a natural-language chat experience in Teams where reviewers can view recommendations, ask follow-up questions, and commit final decisions under their own identity. (learn.microsoft.com)
Key operational attributes:
  • Trigger model: The agent runs automatically every 24 hours from the time it is configured, or it can be triggered manually. There is a short initial processing period after starting the agent. (learn.microsoft.com)
  • Identity model: The agent runs using the identity of the administrator who first activated it to gather insights and store recommendations; final decisions are recorded under the reviewer’s identity in Teams. (learn.microsoft.com)
  • Supported review types: At preview, supported scenarios include Teams + Groups, Access package assignments, and Application assignments. Azure resource roles, Entra roles, and PIM-managed groups are not supported yet. Size limits apply: the agent currently supports up to 35 decisions per review. (learn.microsoft.com)
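The documentation describes the scoring as deterministic but does not spell out the algorithm. The sketch below is a hypothetical illustration of the pattern rather than the agent’s actual logic: it combines the same kinds of signals named in the overview (sign-in recency, resource activity, employment status) into a repeatable score and maps it to a recommendation. All field names, weights, and thresholds are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical signal set mirroring the context the agent gathers.
# Field names, weights, and thresholds are illustrative assumptions only.
@dataclass
class AccessSignals:
    last_sign_in: datetime | None   # last interactive sign-in, if any
    is_active_employee: bool        # HR / employment status
    used_resource_recently: bool    # group or app activity inside the review window

def score_decision(signals: AccessSignals, now: datetime | None = None) -> tuple[int, str]:
    """Return a deterministic (score, recommendation) pair for a single access decision."""
    now = now or datetime.now(timezone.utc)
    score = 0
    if signals.is_active_employee:
        score += 40
    if signals.last_sign_in and now - signals.last_sign_in < timedelta(days=30):
        score += 30
    if signals.used_resource_recently:
        score += 30
    return score, ("Approve" if score >= 60 else "Deny")

# Example: an active employee who signed in last week but never touched the resource.
example = AccessSignals(
    last_sign_in=datetime.now(timezone.utc) - timedelta(days=7),
    is_active_employee=True,
    used_resource_recently=False,
)
print(score_decision(example))  # (70, 'Approve')
```

The value of determinism is reproducibility: identical inputs always yield the same recommendation, which is what makes the output auditable and comparable across reviewers, while the LLM is used only to explain the result in natural language.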

What administrators must provision and verify before turning it on​

The Access Review Agent has concrete, non-negotiable prerequisites. Before enabling it you must:
  • License requirements: Have either Microsoft Entra ID Governance or Microsoft Entra Suite licenses assigned in the tenant. (learn.microsoft.com)
  • Security Copilot onboarding: Be onboarded to Microsoft Security Copilot with at least one Security Compute Unit (SCU) provisioned. Microsoft estimates about one SCU per 20 decisions analyzed as a working average; real consumption depends on conversation length and complexity. Budget SCUs accordingly for pilot and production (a sizing sketch follows this list). (learn.microsoft.com)
  • Roles for setup: Configure the agent using an account that has standing permissions (not one that requires activation via PIM). The minimum roles for setup and management are:
    • Identity Governance Administrator
    • Lifecycle Workflows Administrator
    • Security Copilot Contributor
    Reviewers who will interact with the agent in Teams must also be assigned the Security Copilot Contributor role so that the behind-the-scenes Security Copilot session can execute. (learn.microsoft.com)
  • Microsoft Teams availability: Ensure the Access Review Agent Teams app is allowed in Teams org-wide app policies or explicitly approved by your Teams admin. Reviewers must have access to Teams to use the conversational experience. (learn.microsoft.com)
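For SCU budgeting, Microsoft’s planning figure of roughly one SCU per 20 decisions can be turned into a rough per-cycle estimate. The sketch below is a back-of-the-envelope calculation; the buffer factor for follow-up conversations is an assumption you should replace with real consumption data from the agent’s metrics.

```python
import math

def estimate_scus(decisions_per_cycle: int,
                  decisions_per_scu: int = 20,   # Microsoft's planning figure (~1 SCU per 20 decisions)
                  buffer: float = 1.5) -> int:   # assumed headroom for follow-up chat; tune from real metrics
    """Rough SCU estimate for one review cycle."""
    return math.ceil((decisions_per_cycle / decisions_per_scu) * buffer)

# Example: a pilot covering six reviews of ~30 decisions each (180 decisions total).
print(estimate_scus(180))  # 14 SCUs of planned capacity for the cycle
```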
Practical checklist before activation:
  • Confirm license assignments for pilot users (a Graph sketch for spot-checking licenses follows this checklist).
  • Provision SCUs and confirm Security Copilot onboarding.
  • Map and document admin accounts that will start the agent; choose an appropriately privileged service account with standing permissions.
  • Approve the Teams app or configure org-wide app policy.
  • Run any tenant-level diagnostics recommended for Copilot/agent readiness and ensure audit logging and SIEM ingestion are in place. (learn.microsoft.com)
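A quick way to handle the first checklist item is to read each pilot user’s license details from Microsoft Graph and check for a governance-capable plan. The sketch below assumes you already have a Graph access token with permission to read user license details; the SKU part numbers for Microsoft Entra ID Governance or Entra Suite vary by plan, so the strings you look for in the output are tenant-specific assumptions.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def license_skus(token: str, user_upn: str) -> list[str]:
    """Return the SKU part numbers assigned to a user via Microsoft Graph licenseDetails."""
    resp = requests.get(
        f"{GRAPH}/users/{user_upn}/licenseDetails",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [lic["skuPartNumber"] for lic in resp.json().get("value", [])]

# Example: spot-check the pilot reviewers and confirm a governance-capable SKU is present.
for upn in ["alice@contoso.com", "bob@contoso.com"]:   # hypothetical pilot users
    print(upn, license_skus("<access-token>", upn))
```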

Supported scenarios and current limitations — what to expect in preview​

The Access Review Agent is intentionally scoped in preview. Notable limitations and current behavior include:
  • Supported review types: Teams + Groups, Access package assignments, Application assignments. It does not support Azure resource roles or Microsoft Entra roles yet. (learn.microsoft.com)
  • Review size limits: The agent supports reviews with up to 35 decisions (per review) in preview. Larger reviews are not supported at this time. (learn.microsoft.com)
  • Stages and review types: Only single-stage reviews are supported; multi-stage workflows are not yet available. Reviewer types supported include specific users, group owners, and managers. Self-reviews are not supported. (learn.microsoft.com)
  • Language support: At launch the agent supports English only. Plan for translation or alternative workflows for global tenants. (learn.microsoft.com)
  • Operational limits: Once started, the agent cannot be paused or stopped mid-run; it runs to completion and may take minutes to process. Also, avoid using an account that requires PIM activation to configure the agent — this can cause authentication failures. (learn.microsoft.com)
These limitations affect rollout planning: for larger entitlements programs, staged rollouts and small-scope pilots are mandatory until support for larger reviews and multi-stage flows arrives.
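To scope such a pilot, you can enumerate existing review instances and flag the ones that fit the preview constraints. The sketch below walks the Microsoft Graph identity governance endpoints with a pre-acquired token and counts decisions per instance against the 35-decision limit; it does not inspect review type or stage settings, and paging of large decision sets is omitted, so treat it as a starting point rather than a complete eligibility check.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
MAX_DECISIONS = 35  # preview limit per review

def small_review_instances(token: str) -> list[dict]:
    """List access review instances with no more than 35 decisions (paging omitted for brevity)."""
    headers = {"Authorization": f"Bearer {token}"}
    eligible = []
    definitions = requests.get(
        f"{GRAPH}/identityGovernance/accessReviews/definitions",
        headers=headers, timeout=30).json().get("value", [])
    for definition in definitions:
        instances = requests.get(
            f"{GRAPH}/identityGovernance/accessReviews/definitions/{definition['id']}/instances",
            headers=headers, timeout=30).json().get("value", [])
        for instance in instances:
            decisions = requests.get(
                f"{GRAPH}/identityGovernance/accessReviews/definitions/{definition['id']}"
                f"/instances/{instance['id']}/decisions",
                headers=headers, timeout=30).json().get("value", [])
            if len(decisions) <= MAX_DECISIONS:
                eligible.append({
                    "review": definition.get("displayName"),
                    "instance": instance["id"],
                    "decisions": len(decisions),
                })
    return eligible
```

Reviews that fall outside the constraints (larger, multi-stage, or unsupported types) should continue to flow through the standard My Access experience.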

Security, privacy, and governance considerations​

Automating access reviews with an LLM-backed agent introduces new governance vectors on top of the classic identity governance checklist. Key considerations include:
  • Identity provenance and accountability: The agent runs under the activating admin’s identity to gather insights and save recommendations, while final decisions are recorded under the reviewer’s identity. This separation reduces reviewer impersonation risk but means that recommendations are tied to the original activating admin’s permissions — choose that activating account carefully and document its use. (learn.microsoft.com)
  • Data handling and model access: The Teams chat surface opens a Security Copilot session behind the scenes. Any conversational context and data used to generate justifications traverse that Security Copilot pipeline. Ensure that your Security Copilot privacy and data-security posture aligns with internal policies and regulatory obligations. Microsoft provides guidance on data handling with Security Copilot that must be reviewed before adoption. (learn.microsoft.com)
  • Cost and capacity: SCU consumption is both a cost and capacity control. Microsoft’s estimate of ~1 SCU per 20 decisions is a planning figure; real usage will vary by conversation depth and number of follow-ups. Monitor SCU usage via the agent logs and metrics and plan guardrails to avoid unexpected overage or throttling. (learn.microsoft.com)
  • Auditability and telemetry: The agent surfaces logs and metrics (total decisions analyzed, SCUs used, reviewers engaged). Forward agent telemetry to your SIEM/XDR to ensure decisions and agent actions are captured for audit and incident response. Treat agent telemetry as a first-class observability source. (learn.microsoft.com)
  • Least-privilege and role hardening: Because the agent uses Entra identities and permissions, enforce least-privilege, short-lived privileges where possible, and keep the activating admin account tightly controlled. Avoid making the activating account a general-purpose Global Admin. Consider dedicated, documented operator accounts for agent activation. (learn.microsoft.com)
These agent-specific governance recommendations mirror the broader guidance for Copilot/agent adoption: define ownership, maintain an agent registry, and apply lifecycle policies so agent instances and their owner metadata are auditable. Community discussion emphasizes the need for a center-of-excellence model, telemetry ingestion, and staged canary rollouts when enabling agentic automation.
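A registry entry does not need a dedicated tool; a versioned record per agent instance is enough to capture owner, purpose, and lifecycle metadata. The shape below is a hypothetical example, not a Microsoft schema, and every field name and value is illustrative.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AgentRegistryEntry:
    """Hypothetical registry record for one activated agent instance."""
    agent_name: str
    purpose: str
    activating_account: str            # standing-permission account that started the agent
    business_owner: str
    technical_owner: str
    scopes: list[str] = field(default_factory=list)
    activated_on: str = field(default_factory=lambda: date.today().isoformat())
    review_cadence: str = "quarterly"  # how often ownership and need are re-confirmed

entry = AgentRegistryEntry(
    agent_name="entra-access-review-agent",
    purpose="AI-assisted access review recommendations in Teams",
    activating_account="svc-agent-operator@contoso.com",   # illustrative account
    business_owner="identity-governance@contoso.com",
    technical_owner="iam-engineering@contoso.com",
    scopes=["Teams + Groups reviews, pilot business unit"],
)
print(json.dumps(asdict(entry), indent=2))
```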

Deployment best practices — a step-by-step rollout playbook​

  • Pilot design (2–4 weeks)
    • Pick a small group of non-production reviews (<= 35 decisions) and a narrow set of reviewers (e.g., HR, IT app owners).
    • Assign required licenses and at least one SCU to a pilot tenant.
    • Approve the Teams app in a test tenant and provision Security Copilot Contributor roles to reviewers.
    • Use a dedicated admin account with standing permissions to start the agent and record that account’s metadata in your agent registry. (learn.microsoft.com)
  • Canary deployment (2–6 weeks)
    • Expand to a single business unit with a known volume of access reviews.
    • Monitor agent logs, SCU consumption, and reviewer feedback via the Teams experience.
    • Validate that recommendations align with existing My Access portal recommendations and track divergence rates. Use the agent logs and metrics to quantify accuracy and false positives/negatives. (learn.microsoft.com)
  • Governance hardening (ongoing)
    • Enforce RBAC and conditional access for the activating admin and reviewers.
    • Ingest agent telemetry into SIEM/XDR and set alerts for anomalous agent activity (a query sketch follows this playbook).
    • Create runbooks for revoking Security Copilot Contributor roles and disabling agent processing for specific reviews if needed. (learn.microsoft.com)
  • Enterprise rollout
    • Gradually enable agent processing for broader review scopes, keeping an eye on SCU budgets and Teams app adoption.
    • Implement periodic audits comparing agent recommendations against manual decisions to measure drift and recalibrate scoring thresholds or signals where necessary. (learn.microsoft.com)
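For the telemetry step, once Entra audit logs are flowing into a Log Analytics workspace you can poll for access-review activity from a scheduled job and alert on anomalies. The sketch below uses the azure-identity and azure-monitor-query packages; the KQL category filter is an assumption, so confirm the actual category and activity names your tenant emits for agent-processed reviews before wiring alerts to them.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # workspace receiving Entra audit logs

# The category filter is an assumption; verify the values your tenant actually emits.
QUERY = """
AuditLogs
| where Category == "AccessReviews"
| summarize events = count() by ActivityDisplayName, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
"""

def recent_access_review_activity() -> None:
    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))
    for table in response.tables:
        for row in table.rows:
            print(dict(zip(table.columns, row)))

if __name__ == "__main__":
    recent_access_review_activity()
```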

Practical tips for reviewers and identity teams​

  • Expect the Teams chat experience to summarize the agent’s reasoning and give an explicit recommendation; treat that recommendation as a decision support tool, not an automatic approver. Final accountability still rests with the human reviewer. (learn.microsoft.com)
  • If a reviewer opens the Teams agent before it has processed the review, the agent will direct them to the My Access portal; plan communications to staff to avoid confusion during the agent’s initial processing window. (learn.microsoft.com)
  • For multi-geo tenants or non-English reviewers, maintain fall-back processes for access reviews until language support expands beyond English. (learn.microsoft.com)
  • Use the agent’s logs and metrics to produce quarterly recertification KPIs: recommendation acceptance rate, time-to-complete reviews, and SCU consumption per decision. These metrics help quantify ROI and tune agent usage (a computation sketch follows below). (learn.microsoft.com)
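Computing those KPIs is straightforward once decision records are exported from the agent’s logs. The record shape below is hypothetical (the field names are assumptions about your export, not the agent’s schema); the arithmetic is the part that matters.

```python
from statistics import mean

# Hypothetical exported decision records; field names are illustrative only.
decisions = [
    {"agent_recommendation": "Approve", "reviewer_decision": "Approve", "hours_to_complete": 6.0},
    {"agent_recommendation": "Deny",    "reviewer_decision": "Approve", "hours_to_complete": 30.0},
    {"agent_recommendation": "Deny",    "reviewer_decision": "Deny",    "hours_to_complete": 2.5},
]
scus_consumed = 2  # taken from the agent's metrics for the same period

accepted = sum(d["agent_recommendation"] == d["reviewer_decision"] for d in decisions)
kpis = {
    "recommendation_acceptance_rate": round(accepted / len(decisions), 2),
    "avg_hours_to_complete": round(mean(d["hours_to_complete"] for d in decisions), 1),
    "scu_per_decision": round(scus_consumed / len(decisions), 2),
}
print(kpis)  # {'recommendation_acceptance_rate': 0.67, 'avg_hours_to_complete': 12.8, 'scu_per_decision': 0.67}
```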

Strengths: where the Access Review Agent delivers immediate value​

  • Time savings: Automating insights reduces the time reviewers spend hunting for sign-in patterns, membership context, and HR status — potentially reducing review lead times substantially. Petri and Microsoft both highlight improved reviewer speed and decision confidence. (petri.com)
  • Consistency: A deterministic scoring mechanism helps make recommendations less subjective and therefore more consistent across reviewers than disparate manual processes. (learn.microsoft.com)
  • Integration into existing workflows: Surfacing the experience in Microsoft Teams — where many reviewers already work — lowers friction and increases completion rates. (learn.microsoft.com)
  • Audit trail: Recommendations and justifications are retained and viewable; combined with Entra logs, this supports compliance evidence needs. (learn.microsoft.com)

Risks and practical constraints — what to watch for​

  • Over-reliance and automation bias: Reviewers may accept agent recommendations too readily. Enforce human-in-the-loop policies and periodic audits to catch drift. (learn.microsoft.com)
  • Data and privacy exposure: Because the Teams interaction uses Security Copilot, tenant data may be surfaced into LLM-assisted conversations. Review Security Copilot data-security documentation and limit agent use for sensitive reviews until risk is fully assessed. Flag any claims about model data residency that you cannot verify and proceed cautiously. (learn.microsoft.com)
  • Operational surprises: SCU consumption is an ongoing cost; long conversational threads or higher-than-expected review volumes can increase consumption. Monitor consumption and set budget guardrails. (learn.microsoft.com)
  • Limited scenario coverage: The agent’s preview constraints (35 decision limit, unsupported role types, English-only) mean it won’t be a one-stop solution immediately for all entitlement programs. Plan hybrid flows that route unsupported reviews to the My Access portal. (learn.microsoft.com)
  • Identity binding: Recommendations are generated under the activating admin’s identity. This is a deliberate design for permissions but means one admin’s privileges effectively shape recommendations; treat that activating account as a privileged, audited operator. (learn.microsoft.com)
Community and enterprise discussion of agent governance reinforces these risks and recommends a conservative activation model, strong observability, and an agent registry to track owner, purpose, and lifecycle.

How to evaluate whether to adopt the Access Review Agent​

Use the following decision rubric to determine whether to pilot the Access Review Agent:
  • Business fit: Are your review volumes moderate (<= 35 decisions) and performed in English? Do reviewers already use Teams? If yes, likely a good pilot candidate. (learn.microsoft.com)
  • Governance readiness: Do you have Security Copilot onboarding, SCUs, and a governance plan that includes telemetry ingestion and role hardening? If not, postpone until these controls are in place. (learn.microsoft.com)
  • Cost/benefit: Model SCU consumption based on typical decision counts and conversation depth. Pilot with a known set of reviews to produce empirical SCU per decision numbers and estimate ongoing cost. (learn.microsoft.com)
  • Regulatory constraints: If your access reviews involve highly sensitive data (e.g., health records, financial controls) review data-flow and model usage policies carefully with legal/compliance before turning on the agent. (learn.microsoft.com)

Where this fits in the broader Copilot/agent story​

The Access Review Agent is an early example of embedding Copilot/agent functionality directly into administrative processes: reasoning over directory signals, producing recommendations, and maintaining a human-in-the-loop for final approval. Microsoft’s broader agent strategy — Copilot agents for security and productivity workflows — aims to scale similar patterns across incident triage, policy optimization, and other routine security tasks. That broader context matters because governance patterns that work for security agents apply equally to identity agents: agent registries, identity-first controls, and telemetry-driven lifecycle management. Independent reporting on Microsoft’s agent strategy highlights both opportunity and the need for robust operational controls. (theverge.com)

Final assessment and recommended next steps​

Microsoft’s Access Review Agent addresses a real operational pain: the time and error-proneness of manual access reviews. For organizations with mature Entra governance, Security Copilot onboarding, and Teams-first reviewer workflows, the agent can deliver immediate productivity and consistency gains. However, the preview’s functional limits, the SCU-cost model, the identity model for recommendations, and the centralized telemetry requirements mean this is not a “flip-the-switch” change — it requires a measured, governed rollout.
Recommended next steps:
  • Assemble a cross-functional pilot team (Identity, Security, Compliance, Teams admin) and run a small-scale pilot for 4–8 weeks. (learn.microsoft.com)
  • Provision SCUs and monitor SCU consumption closely; adapt pilot size as needed. (learn.microsoft.com)
  • Define agent operator accounts, maintain an agent registry, and enforce RBAC and conditional access around activating admin accounts.
  • Ingest agent logs into your SIEM/XDR and create alerts for anomalous recommendation patterns or unexpected SCU spikes. (learn.microsoft.com)
  • Maintain fallback flows through the My Access portal for unsupported reviews and non-English reviewers. (learn.microsoft.com)
The Access Review Agent is a pragmatic step toward AI-assisted identity governance: powerful, but conditional on governance maturity and operational oversight. For practical adoption, prioritize pilots that answer three questions empirically — does it reduce reviewer time, does it produce accurate and defensible recommendations, and does the SCU cost model scale for your environment? If those checkboxes are positive, this agent can become a useful augmentation to your identity governance program. (petri.com)

Conclusion
The Access Review Agent delivers a tangible productivity story: in-context, AI-assisted access reviews inside Teams. Microsoft’s documentation and independent reporting show the feature is ready for enterprise pilots but not yet a catch-all replacement for existing review workflows. Strong governance, telemetry integration, and cautious, staged deployment are essential to capture benefits while reducing risks — a disciplined approach that mirrors how enterprises should adopt every new Copilot-era capability. (learn.microsoft.com)

Source: Petri IT Knowledgebase Microsoft Entra ID Launches Access Review Agent in Preview