EY Canvas Rolls Out Enterprise Agentic AI to Redefine Global Audits by 2028

EY’s latest Assurance move is more than a software update. It is a full-scale attempt to rewire how global audits are planned, executed and supervised, with agentic AI now being embedded into EY Canvas, the firm’s central audit platform, across a network that spans 130,000 Assurance professionals, 160,000 audit engagements, and more than 150 countries and territories. EY says the new architecture is already live after testing and piloting, and that it is intended to support end-to-end audit activity by 2028. (ey.com)
The significance is not just operational. EY is positioning the rollout as a response to the exploding volume of corporate data and the growing need to audit AI-related risks, controls and governance, while still keeping human judgment at the center of the process. That balance matters, because if auditors rely too heavily on automation, they risk introducing blind spots; if they resist automation, they risk missing the scale and complexity of modern enterprise systems. EY is trying to thread that needle in public, at enterprise scale, and with Microsoft deeply embedded in the stack. (ey.com)

Overview

EY’s announcement arrives at a moment when the auditing profession is being pushed in two directions at once. On one side, clients are generating more data than traditional audit workflows were ever designed to handle. On the other, those same clients are deploying AI systems that create new exposures in model governance, data quality, access controls and accountability. EY’s answer is to move from incremental digitization to a multi-agent framework that can orchestrate tasks inside the audit process itself. (ey.com)
This is also a continuation of a strategy EY has been building for years. The firm has already layered AI capabilities into Assurance, expanded guided workflows, and tied much of that work to a multiyear technology investment program. In 2023, EY said its Assurance teams were processing hundreds of billions of journal-entry lines per year; by 2026, the firm says, that figure had grown to more than 1.4 trillion journal-entry lines annually. That growth tells its own story: the firm is no longer experimenting on the margins; it is industrializing AI within a core trust service. (ey.com)
Microsoft is central to that plan. EY says the new agentic framework is built on Microsoft Azure, Microsoft Foundry and Microsoft Fabric, which suggests the system is meant to scale across data ingestion, orchestration and application layers rather than sit as a thin add-on. The alliance is not new, but the depth is notable: EY has been using Microsoft technology in Assurance for years, and the current release marks a more tightly integrated phase of that partnership. (ey.com)
The broader market context matters too. Microsoft has spent the past year aggressively promoting the idea of the Frontier Firm — organizations that blend human expertise with AI agents as a structural operating model. EY is now one of the inaugural members of Microsoft’s Frontier Firm AI Initiative with Harvard’s Digital Data Design Institute, which gives the company a visible place in the emerging enterprise AI narrative. That matters for branding, but it also matters for competitive pressure: the Big Four are now being compared not just on audit quality, but on how convincingly they use AI to augment professional judgment. (microsoft.com)

Why EY Is Doing This Now

EY is responding to a basic structural problem in modern audit: the scale and shape of the evidence have changed faster than the workflow. A single engagement can involve massive transaction volumes, fragmented systems, cloud-native operations, and fast-moving controls around AI, cybersecurity and data governance. In that environment, manual review alone becomes a bottleneck, even for highly skilled teams. (ey.com)
The firm’s own language is revealing. EY says the deployment is designed to reduce administrative burden, improve risk assessment, and preserve human skepticism and insight. That is the right framing, because audit is not supposed to become an autonomous machine exercise. The goal is to compress low-value work, surface anomalies earlier and leave the judgment calls to professionals who can interpret context, intent and materiality. (ey.com)

From digitized audit to agentic audit

Earlier audit-tech waves focused on analytics, dashboards, automated tie-outs and smarter search. Agentic AI goes further by orchestrating multi-step tasks, coordinating specialized components and adapting to context as the engagement unfolds. In practice, that could mean an AI layer that helps route evidence gathering, summarize guidance, prepare planning outputs and assist with issue identification before the human team signs off. (ey.com)
That shift is important because it changes the role of the technology from passive tooling to active workflow participant. The upside is obvious: less swivel-chair work, faster analysis and better consistency across engagements. The risk is equally obvious: the more a system shapes the path auditors take, the more carefully it has to be governed, tested and audited itself. That is why EY keeps returning to responsible AI language in its messaging. (ey.com)
The timing also reflects competitive reality. Other large firms are moving in similar directions, and smaller players are increasingly using Microsoft-based AI stacks to improve efficiency. EY cannot afford to look slower, especially when audit clients are themselves asking how firms are using AI in the assurance process. The race is no longer just about compliance; it is about proving modernity without sacrificing trust.
  • More data means more opportunities for automation.
  • More AI in client systems means more AI-specific assurance work.
  • More competition means audit quality is becoming a technology story.
  • More scale means the human-review bottleneck becomes more expensive.

The client-zero philosophy

EY’s “client zero” idea is strategically smart. If the firm is going to tell clients to trust AI in mission-critical business processes, it has to show that it is willing to use the same tools on itself. That creates a stronger credibility loop than selling AI from the outside while keeping internal processes old-fashioned. (ey.com)
It also changes expectations inside the firm. When internal transformation is public, people assume measurable gains, not vague innovation theater. EY is effectively promising that its own audit organization will become the proof point for its advisory recommendations, which raises the bar for execution and accountability. If the system disappoints internally, the market will notice quickly. (ey.com)

What Changes Inside EY Canvas

EY Canvas is not just where the new AI lives; it is the operating layer of the audit. EY says the platform processes more than 1.4 trillion lines of journal entry data per year, which indicates a huge volume of structured evidence is already flowing through its workflows. Embedding agents directly into that environment should make it easier to identify anomalies, assemble working papers and guide auditors through risk-based tasks without bouncing between disconnected systems. (ey.com)
That matters because the real value of AI in audit is not flashy text generation. It is reducing friction across repetitive steps: data triage, content search, checklist support, tie-out procedures and risk assessment. EY has already been releasing features in those areas, including AI-enabled guided workflows and content summarization, and the 2026 rollout extends that trajectory into a more integrated multi-agent model. (ey.com)
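To make the idea of data triage concrete, here is a minimal sketch of rule-based journal-entry screening, the sort of repetitive check an embedded agent might run before a human reviews the exceptions. This is not EY's implementation; the field names, rules and thresholds are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    entry_id: str
    amount: float
    posted_by: str
    period: str

def flag_exceptions(entries, materiality=10_000.0):
    """Return entries that warrant human review, with the reasons attached.
    Rules and thresholds are illustrative, not EY methodology."""
    flagged = []
    for e in entries:
        reasons = []
        if abs(e.amount) >= materiality:
            reasons.append("above materiality threshold")
        if e.amount != 0 and e.amount == round(e.amount, -3):
            reasons.append("suspiciously round amount")
        if reasons:
            flagged.append((e, reasons))
    return flagged

entries = [
    JournalEntry("JE-001", 125_000.0, "ops_user", "2026-01"),
    JournalEntry("JE-002", 412.37, "ops_user", "2026-01"),
]
for entry, reasons in flag_exceptions(entries):
    print(entry.entry_id, "->", "; ".join(reasons))
    # JE-001 -> above materiality threshold; suspiciously round amount
```

The point of a sketch like this is that the machine compresses the triage step while the exception list, and the judgment about what it means, stays with the auditor.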

The multi-agent model

A multi-agent architecture suggests specialization. One agent may gather and normalize data, another may surface exceptions, another may route audit guidance, while a human professional validates the result. That is different from a single chatbot sitting in the background, because it implies a more modular and scalable design with clearer workflow boundaries. (ey.com)
It also creates a more defensible audit narrative. In assurance, the question is not whether AI can produce a useful answer; it is whether the method, evidence path and supervisory controls are reliable enough to withstand inspection. A multi-agent system can help if it logs decisions, preserves provenance and keeps humans in the loop. Without those controls, it becomes just another source of opaque automation risk. (ey.com)
  • Orchestrates complex tasks instead of isolated prompts.
  • Embeds AI into the audit workflow rather than adjacent to it.
  • Supports dynamic risk assessment across engagement stages.
  • Can reduce duplicated manual effort if controls are strong.
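The division of labor described above can be sketched in a few lines. The agent names, hand-offs and shared state below are illustrative assumptions, not EY Canvas internals; the sketch shows why a modular pipeline with a built-in provenance trail differs from a single background chatbot.

```python
# A toy multi-agent pipeline over shared engagement state.
# Roles and hand-offs are illustrative assumptions, not EY's architecture.

def ingest_agent(state):
    """Normalize raw evidence into a consistent form."""
    state["normalized"] = [item.strip().lower() for item in state["raw_evidence"]]
    return state

def exception_agent(state):
    """Surface items that need human attention."""
    state["exceptions"] = [i for i in state["normalized"] if "unreconciled" in i]
    return state

def guidance_agent(state):
    """Route relevant guidance based on the exceptions found."""
    state["guidance"] = ["review reconciliation controls"] if state["exceptions"] else []
    return state

def run_pipeline(raw_evidence, agents):
    """Run each agent in order, keeping a log of who acted and when."""
    state = {"raw_evidence": raw_evidence}
    audit_log = []
    for agent in agents:
        state = agent(state)
        audit_log.append(agent.__name__)
    return state, audit_log

state, audit_log = run_pipeline(
    ["Unreconciled cash balance ", "Vendor invoice 44"],
    [ingest_agent, exception_agent, guidance_agent],
)
print(state["exceptions"])  # ['unreconciled cash balance']
print(audit_log)            # ['ingest_agent', 'exception_agent', 'guidance_agent']
```

Because each agent has a single responsibility and the orchestrator records the sequence, the workflow stays inspectable: a reviewer can see which step produced which output before signing off.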

Scaling from pilot to production

EY says the system followed a “sustained period of extensive and successful testing and piloting.” That phrasing matters because it signals the firm knows the reputational stakes: no one wants to bet assurance quality on a demo. Moving from pilot to production in a regulated profession requires not only technical readiness but also methodology changes, training and sign-off discipline. (ey.com)
The firm says the technology is expected to support all end-to-end audit activities by 2028. That is an ambitious timeline, but it is realistic in one respect: large organizations rarely switch on a global system all at once. They phase it in, refine the controls, and then expand use cases as confidence grows. The next two years will likely determine whether EY’s rollout becomes a case study or a cautionary tale. (ey.com)

Microsoft’s Role in the Stack

Microsoft is not simply a vendor here; it is a structural enabler. EY says the new system is built on Azure, Foundry and Fabric, which suggests the firm wants a cloud-native, data-native and agent-native foundation for Assurance. That choice also aligns with Microsoft’s broader push to become the default platform for enterprise AI workflows. (ey.com)
The alliance has already been used to power earlier Assurance releases, including AI-enabled analytics, search and summarization, and document intelligence. In 2023, EY said its Assurance technology stack had expanded with Microsoft Fabric and Azure-based performance improvements, and that the firm was using Azure OpenAI Service and early access to Microsoft 365 Copilot internally. The current move looks like the next logical step rather than a sudden pivot. (ey.com)

Why Foundry matters

Microsoft Foundry is important because it points to an operationalized AI-building environment, not just model access. For audit use cases, that means stronger prospects for orchestration, integration and governance across multiple agents and workflows. In a profession where traceability matters as much as speed, platform choice shapes the control model. (ey.com)
Fabric also matters because audit is a data problem as much as a reasoning problem. If the firm can unify structured and semi-structured evidence into a coherent analytics layer, AI can do more than summarize documents. It can help spot patterns across ledgers, controls, disclosures and journal-entry behavior in ways that are hard to scale manually. (ey.com)
The competitive implication is simple: Microsoft is helping EY convert a professional-service workflow into a highly instrumented digital system. That gives EY a story about scale, governance and productivity that rivals can copy only if they match the same platform depth. In enterprise AI, integration is often the moat, not model novelty. (ey.com)
  • Azure provides the cloud base.
  • Fabric supports unified data handling and analytics.
  • Foundry helps coordinate the AI layer.
  • EY Canvas remains the user-facing workflow environment.

The partner ecosystem effect

EY’s tie-up with Microsoft also reinforces the idea that assurance is becoming a platform business. The firm is not building everything from scratch; it is embedding itself in a broader ecosystem of cloud, model and governance tooling. That can accelerate innovation, but it also increases dependency on a single technology stack and its roadmap. (ey.com)
That dependency is not inherently bad, especially if the platform is mature and resilient. But it means EY must stay vigilant about portability, vendor lock-in and model governance. If enterprise AI infrastructure keeps changing as quickly as it has over the past year, the ability to adapt without losing control becomes a strategic advantage in its own right.

Audit Quality, Human Judgment and Control

EY is careful to say that AI is being added alongside a modernized audit methodology rather than replacing professional skepticism. That distinction is essential. Assurance works because humans evaluate judgment calls, estimate uncertainty and consider evidence in context; software can assist with that process, but it cannot own the opinion in any meaningful governance sense. (ey.com)
The firm’s responsible AI principles help explain why it is emphasizing controls. EY says those principles guide its own use of AI, not just client advice, and the new deployment is said to align with those standards. That should reassure clients who worry about hallucinations, poor traceability or inconsistent outputs, but reassurance only goes so far unless the workflow includes rigorous review and documentation. (ey.com)

The audit opinion still belongs to people

This may sound obvious, but it is the core point. An AI agent can improve planning, highlighting and drafting, yet the audit conclusion still needs accountable human sign-off. That is not just a legal safeguard; it is a trust safeguard, because capital markets rely on the perception that someone with expertise and independence has actually exercised judgment. (ey.com)
The biggest danger is not that AI replaces auditors. It is that auditors become overconfident in machine-generated outputs and stop interrogating the evidence deeply enough. In a profession defined by skepticism, automation bias is one of the most serious intangible risks. EY’s challenge is to make the tools useful without making them persuasive in the wrong way. (ey.com)
A second issue is explainability. If an agent flags a risk, the auditor has to understand why. If the path from input data to conclusion is not intelligible, the system may accelerate workflow but weaken assurance defensibility. That is why provenance, logs and review checkpoints are not optional extras; they are the product.
  • Human sign-off remains central.
  • Evidence provenance must stay visible.
  • Review controls must be stronger, not weaker.
  • AI should support skepticism, not flatten it.
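The control points listed above imply, at minimum, an append-only record of what the system recommended, what evidence it pointed to, and who approved it. A hypothetical sketch of such a decision log follows; the class and field names are assumptions for illustration, not a description of EY's tooling.

```python
import json
from datetime import datetime, timezone

class DecisionLog:
    """Append-only provenance log: every AI recommendation is recorded
    alongside the accountable human decision. Fields are illustrative."""

    def __init__(self):
        self._records = []

    def record(self, agent, recommendation, evidence_refs, reviewer, approved):
        self._records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "recommendation": recommendation,
            "evidence_refs": evidence_refs,  # pointers back to source evidence
            "reviewer": reviewer,
            "approved": approved,
        })

    def export(self):
        # A serialized trail that a quality reviewer or inspector could examine.
        return json.dumps(self._records, indent=2)

log = DecisionLog()
log.record(
    agent="exception_agent",
    recommendation="escalate unreconciled cash balance",
    evidence_refs=["JE-001", "bank-stmt-2026-01"],
    reviewer="engagement_senior",
    approved=True,
)
print(log.export())
```

A log like this is what turns "the AI flagged it" into a defensible audit narrative: the recommendation, the evidence path and the human sign-off are all preserved together.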

Assurance for AI, not just with AI

EY is also broadening its work for clients that are building AI systems of their own. That means assurance is expanding beyond traditional financial statements into diagnostics, governance, risk management and controls for AI programs. This is a major strategic move because the future audit market will likely include both the auditing of AI-enabled businesses and the auditing of AI itself. (ey.com)
In other words, EY is not only using AI to audit companies; it is preparing to audit the technologies companies increasingly depend on. That opens a new service lane, but it also raises the bar for competence. If clients are deploying enterprise AI at scale, they will expect auditors to understand model drift, access restrictions, training-data integrity and control design. (ey.com)

The Singapore Angle and the Talent Equation

EY’s Singapore commentary is especially interesting because it links global AI strategy to workforce transformation. The firm argues that AI skills can shift professionals toward higher-value work by reducing time spent on routine tasks, and it explicitly ties this to Singapore’s broader encouragement of AI adoption in accountancy and legal services. That frames the rollout not only as a technology story, but as a labor-market story.
That connection matters because the first question many professionals ask about agentic AI is not whether it works, but what happens to their jobs. EY’s answer is that the work changes shape rather than disappears: less manual checking, more interpretation, more client-facing analysis. Whether that promise holds will depend on how leaders redesign roles, training and promotion paths. (ey.com)

Skills, training and adoption

EY says it has created a global training program for audit and technology risk professionals, with in-person and immersive learning that will be updated as regulations and methodology evolve. That is a smart move, because AI adoption in assurance is as much a training challenge as a software challenge. People have to learn how to supervise agents, question outputs and identify when automation is overreaching.
The talent implication extends beyond EY itself. If agentic AI reduces the share of time spent on repetitive audit tasks, then firms may hire for a different mix of skills: data literacy, risk interpretation, controls thinking and client advisory depth. That could make the profession more attractive to some candidates and more intimidating to others. In practice, it may do both.
There is also a geographic angle. Singapore and other jurisdictions that actively promote digital transformation may become early testing grounds for AI-enabled professional services, because regulators and employers are often more open to structured innovation. That does not mean standards soften; it means firms that can show strong governance may gain an early competitive edge.
  • Training is now part of the product rollout.
  • Workforce redesign will follow technology deployment.
  • Higher-order judgment skills become more valuable.
  • AI literacy is becoming a baseline professional skill.

Enterprise versus consumer impact

For consumers, this story is mostly invisible except when it affects the reliability of companies they invest in, buy from or work for. For enterprises, it is immediately material. EY’s clients may see faster requests, better-targeted evidence collection and more AI-aware assurance work, but they will also face a tougher standard for documenting governance and control environment maturity. (ey.com)
That difference matters because enterprise AI is not about novelty. It is about operational resilience, regulatory confidence and board-level accountability. If EY’s rollout works, it could make audits faster and smarter without making them feel less rigorous. If it fails, it could create a new category of “AI-washed” assurance that looks modern but fails at the first serious test. (ey.com)

Industry Implications

EY’s move will almost certainly influence the rest of the profession. Big Four firms compete on trust, technology and talent, and once one major player frames agentic AI as part of the future of audit, the others have to respond. The market may not converge on the exact same architecture, but the direction of travel is becoming hard to miss. (ey.com)
The competitive angle is not limited to rival firms either. Enterprise software vendors, cloud providers and data-platform companies are now part of the audit-value chain. That means the audit market is no longer just a people business with some software added; it is increasingly a platform business where data orchestration and AI tooling shape the client experience. (ey.com)

What rivals will likely copy

Expect competitors to copy the broad pattern rather than the exact implementation. The pattern is: centralize the audit workflow, embed AI into core tools, formalize responsible AI controls, and tie the rollout to a broader transformation narrative. That playbook is attractive because it promises both cost leverage and quality improvement. (ey.com)
But there is a catch. Once AI becomes part of the audit core, firms need stronger documentation, stronger escalation paths and stronger model governance. The operational gains can be real, but only if the risk controls scale as fast as the automation. Otherwise, firms may simply be moving faster into the same old problems.
  • Larger firms will likely deepen platform partnerships.
  • Audit-tech investment will become a talent differentiator.
  • Assurance on AI systems will grow into a major service line.
  • Regulators will expect clearer controls and traceability.

Regulation will shape adoption

The regulatory environment will decide how far these systems can go. Audits are already highly regulated, and any AI layer inserted into evidence gathering or risk evaluation will have to survive scrutiny from regulators, inspection bodies and internal quality reviewers. EY’s emphasis on responsible AI is therefore not just ethical branding; it is a practical precondition for adoption. (ey.com)
That scrutiny could become stricter as AI-specific assurance work expands. Once firms begin auditing clients’ models, data pipelines and governance controls, they will need standards for what good looks like. EY’s rollout suggests it wants to shape that conversation, not just react to it. (ey.com)

Strengths and Opportunities

The strongest part of EY’s strategy is that it builds on an existing global platform rather than adding disconnected AI features. That gives the firm a credible path from experimentation to scaled workflow transformation, and it aligns with the reality that audit quality improves when technology is integrated into the process rather than bolted on at the end. (ey.com)
It also positions EY to capitalize on the fast-growing market for AI assurance. As more companies deploy enterprise AI, they will need help understanding governance, controls and model risk, and EY can offer that from both the inside-out and outside-in perspectives. That creates a rare dual advantage: EY can use AI to improve its own audits while advising clients on their AI risk. (ey.com)
  • Scale across 160,000 engagements creates strong learning effects.
  • Integration into Canvas should reduce workflow friction.
  • Microsoft alliance gives EY enterprise-grade cloud and AI plumbing.
  • Responsible AI principles provide a governance story regulators can understand.
  • Training investment can help reskill the assurance workforce.
  • AI assurance services open a new growth channel.
  • Client-zero messaging strengthens credibility with customers.

Risks and Concerns

The main risk is overconfidence. In assurance, even a small increase in automation bias can have outsized consequences if it causes teams to miss anomalies or rely too heavily on a model’s suggestion. EY is right to emphasize human judgment, but the real test is whether those controls hold up under deadline pressure and heavy engagement volume. (ey.com)
There is also the governance problem. A multi-agent system can be powerful, but it can also become hard to explain if responsibilities blur between model, workflow and human reviewer. If audit teams cannot clearly trace why a recommendation was made, the system’s value may be offset by inspection risk and documentation overhead. (ey.com)
  • Automation bias could weaken skepticism if not managed carefully.
  • Explainability gaps may complicate regulatory review.
  • Vendor dependence on Microsoft could create strategic lock-in.
  • Workforce anxiety may grow if staff see AI as a replacement rather than support.
  • Model drift could affect quality if the system is not continuously monitored.
  • Compliance variation across jurisdictions may slow global rollout.
  • Public expectations may outpace what the technology can safely deliver.

Looking Ahead

The next stage will be about proof, not promises. EY says the system will support all end-to-end audit activities by 2028, but the market will judge it on whether audits become more insightful, more defensible and less administratively burdensome in practice. If those benefits materialize, the firm will have established one of the most consequential technology shifts in the history of audit. (ey.com)
The other key question is whether EY can keep evolving the governance model as quickly as the technology. The future of assurance will not be decided by who has the most agents, but by who can make those agents trustworthy, inspectable and genuinely useful to both auditors and clients. In that sense, the EY rollout is less an endpoint than a test case for the entire profession. (ey.com)
  • Monitor whether EY publishes measurable audit-quality gains.
  • Watch for new AI assurance offerings for clients.
  • Track how regulators respond to agentic audit workflows.
  • Observe whether competitors launch similar multi-agent platforms.
  • Pay attention to talent and training outcomes inside Assurance.
The broader lesson is that audit is becoming one of the clearest real-world laboratories for enterprise AI. EY is betting that the combination of scale, governance and human oversight can turn agentic systems into a durable advantage rather than a compliance headache. If that bet pays off, it will not just change how EY audits companies; it will help define how trust is built in an AI-heavy economy.

Source: IT Brief Australia https://itbrief.com.au/story/ey-rolls-out-ai-audits-across-global-assurance-business/
 
