EY’s Simon Brown frames the challenge clearly: agentic AI is no longer an abstract tech trend — it’s a workforce engine that will rewire HR, L&D and organizational culture, and the time to prepare is now.

Background​

Simon Brown, EY’s Global Learning & Development leader, has spent the last two years building programs to ready EY’s roughly 400,000-strong workforce for an AI‑driven future. That headcount and his remit are listed on EY’s official people pages, which describe him as leading learning for “400,000 people in more than 150 countries.” (ey.com)
Brown’s central argument — distilled in his recent interview — is twofold. First, the technology is evolving from narrow automation to agentic AI: multi‑step, autonomous assistants able to plan, act and orchestrate across systems. Second, the real organizational barrier isn’t just models and compute; it’s culture, change management and the way HR designs careers and incentives. Those themes mirror broader industry analyses that place culture, governance and role redesign at the centre of responsible adoption.
This feature unpacks Brown’s recommendations, verifies technical claims he and others make, evaluates the upside and risks, and gives HR and IT leaders a concrete playbook to prepare for agentic AI in the enterprise.

What Simon Brown actually said — the nutshell summary​

  • Leaders must ask whether people know what’s possible with agents, whether teams are experimenting, and whether the culture supports safe experimentation and failure.
  • Adoption is a cultural signal: who is role‑modelling AI use, and are front‑line employees encouraged to explore tools like Copilot?
  • Change management matters more than ever; technology alone won’t deliver benefits if people are fearful or unsupported.
  • Brown describes a three‑loop framework for value: (1) do current tasks cheaper/faster/better, (2) realize new value (more customers/new products), (3) reinvent the business if AI becomes pervasive.
  • HR must move from job‑level planning to task‑level audits, reskilling incentives, and internal mobility frameworks that preserve careers while harvesting AI productivity.
These points echo practical roadmaps widely recommended for AI in HR — start small, instrument outcomes, harden governance and scale only after validation.

Why this matters now: technology is moving faster than many leaders realize​

Two claims from Brown’s interview require verification because they change the scale and timing of response:
  • “AI capabilities double every six months.”
  • “Copilot now has GPT‑5 access,” which dramatically alters agent performance.
Both claims are time‑sensitive and deserve corroboration.

How fast are AI models improving?​

Leading industry voices and research show that AI capabilities are accelerating — but the exact doubling interval depends on the metric used.
  • Microsoft leadership has publicly stated, and coverage of Microsoft Ignite has echoed, that AI scaling has compressed to a cadence far faster than traditional Moore’s Law; Satya Nadella described a doubling on the order of months rather than years. This is the origin of the “doubling every six months” narrative in many corporate keynotes. (computerweekly.com)
  • Independent research groups have measured different doubling windows depending on task type and benchmark. Some recent studies show task‑time horizons roughly halving every 4–7 months for certain agent benchmarks; others aggregate across broader metrics and find slower but still steep improvement. Those studies are valuable but specific — they do not establish a universal “six‑month law” across all capabilities. See academic and reporting summaries that show doubling behavior but with varying intervals. (arxiv.org, livescience.com)
Bottom line: models and agent capabilities are improving extremely quickly and unpredictably. Leaders should plan for rapid capability shifts rather than the annual upgrade cycles common in enterprise IT. Phrase the doubling claim as an operational observation (useful planning heuristic) rather than an immutable law — and treat it with caution in vendor conversations.
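To make the planning implication concrete, here is a back‑of‑the‑envelope illustration (a heuristic, not a forecast) of how different doubling intervals compound over a two‑year planning horizon. The intervals come from the studies and keynotes cited above; the figures are illustrative only.

```python
# Rough planning arithmetic, not a law: capability growth over a 24-month
# horizon under different doubling intervals. The 4-7 month range reflects the
# benchmark studies cited above; "6 months" is the keynote heuristic.
horizon_months = 24
for doubling_months in (4, 6, 7):
    multiple = 2 ** (horizon_months / doubling_months)
    print(f"Doubling every {doubling_months} months -> ~{multiple:.0f}x over {horizon_months} months")
```

Even the slowest of those intervals implies an order‑of‑magnitude shift within a single budgeting cycle, which is why annual upgrade planning breaks down.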

Is Copilot using GPT‑5?​

OpenAI announced GPT‑5 and characterized it as a meaningful step forward in multimodal reasoning and agentic capabilities; Microsoft announced that GPT‑5 was available in Copilot Studio on day one and that Microsoft 365 Copilot environments would be updated to leverage GPT‑5 in some configurations. Those product announcements confirm the core claim: leading enterprise copilots now have access to the newest large models from OpenAI, which materially shifts baseline agent performance. (openai.com, microsoft.com)
This has practical consequences: if your enterprise is tied to older model versions, your users will experience a significant capability gap compared with colleagues who run GPT‑5‑backed copilots. That alone can drive shadow‑IT and rapid, ungoverned adoption of consumer tools.

Culture, experimentation and the human side of agents​

Brown’s clearest and most actionable message is cultural: HR must engineer an environment where people can safely experiment with agents.

What to look for (culture checklist)​

  • Leadership role modelling. Are senior leaders regularly using and showcasing copilots and agents? Adoption metrics usually mirror leadership behavior.
  • Experimentation rituals. Are there recurring hackweeks, L&D sprints or AI task forces that explore agentic workflows?
  • Time to learn. Has the organization protected time for employees to train, experiment and surface failures?
  • Psychological safety. Are mistakes treated as learning signals rather than disciplinary failures?
When these building blocks are missing, Brown warns, even the most advanced tools will underdeliver. That’s a recurring theme in enterprise case studies: technology without culture is a wasted investment.

The governance and compliance imperative​

Agentic AI increases the surface area for privacy, bias, auditability and safety risks. Brown’s team at EY — and numerous independent best‑practice sources — recommend an integrated governance stack:
  • Data classification + DLP rules that prevent posting sensitive PII to public LLMs.
  • Logging and provenance for agent decisions so auditors can reconstruct “why” and “how.”
  • Human‑in‑the‑loop gates for consequential outcomes (hiring, promotion, termination); a minimal sketch of such a gate, with provenance logging, follows this list.
  • Regular bias/fairness testing and independent audits for any model used in HR decisions.
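The following is a minimal, hypothetical sketch of what an agent‑decision gate with an audit trail could look like. The function and field names (record_and_gate, CONSEQUENTIAL_OUTCOMES and so on) are illustrative assumptions, not a description of EY’s or any vendor’s implementation.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent_audit")

# Outcome types that always require a named human approver before any action is taken.
CONSEQUENTIAL_OUTCOMES = {"hiring", "promotion", "termination", "compensation_change"}

def record_and_gate(agent_name: str, outcome_type: str, recommendation: dict) -> dict:
    """Log an agent recommendation with provenance, and flag whether it may
    proceed automatically or must wait for human sign-off."""
    requires_signoff = outcome_type in CONSEQUENTIAL_OUTCOMES
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_name,
        "outcome_type": outcome_type,
        "recommendation": recommendation,  # what the agent proposed, verbatim
        "requires_human_signoff": requires_signoff,
        "status": "pending_review" if requires_signoff else "auto_approved",
    }
    audit_log.info(json.dumps(entry))  # persistent trail so auditors can reconstruct "why" and "how"
    return entry

# Example: a screening agent suggests advancing a candidate; a human must confirm.
decision = record_and_gate("screening-agent", "hiring", {"candidate_id": "C-102", "action": "advance"})
```

The point of the pattern is separation of concerns: the agent can recommend anything, but consequential actions are blocked until a logged, attributable human decision exists.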
Regulatory reality is also shifting: the EU AI Act already targets personnel systems as high‑risk, requiring documentation, testing and human oversight. U.S. regulators (EEOC) are likewise scrutinizing employment‑related AI for civil‑rights compliance. HR leaders must treat compliance as a core design constraint, not an afterthought.

Skills, workforce planning and role redesign​

Brown reframes workforce planning from “who holds a job title” to “which tasks are impacted.” That task‑level view is essential.

Three categories HR needs to map​

  • Tasks that AI will likely automate or materially accelerate.
  • Tasks that remain uniquely human (empathy, complex judgement, stakeholder management).
  • New tasks and careers created by AI (prompt engineering, agent orchestration, model‑audit roles).
The World Economic Forum’s Future of Jobs Report projects both displacement and net job creation: by 2030, WEF estimates 170 million new roles created and 92 million displaced for a net increase of 78 million jobs. That pattern — large churn, not wholesale elimination — tracks Brown’s observation that job content will shift more than disappear. HR must design reskilling and mobility pathways at scale. (weforum.org)

Practical workforce steps HR must own​

  • Task‑level audits (not just job titles) to quantify automation exposure; a simple audit record is sketched after this list.
  • Career mapping that shows lateral redeployment options before considering layoffs.
  • Incentives and time budgets for reskilling — learning without schedule protection is ineffective.
  • Internal credentialing (badges, micro‑credentials) tied to performance review and mobility.
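As a concrete starting point for the task audit, here is a minimal sketch of what an audit record and a pilot shortlist might look like. The field names, scores and risk tiers are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class TaskAuditRecord:
    task: str                   # the task, not the job title
    hours_per_month: float      # organizational time spent on the task
    automation_exposure: float  # 0.0 (uniquely human) to 1.0 (fully automatable), estimated in the audit
    risk_tier: str              # "low", "medium", "high" - drives governance; pilots start low

    def monthly_hours_at_stake(self) -> float:
        return self.hours_per_month * self.automation_exposure

tasks = [
    TaskAuditRecord("Answer benefits FAQs", 320, 0.7, "low"),
    TaskAuditRecord("Draft promotion justifications", 80, 0.3, "high"),
]

# Rank candidate pilots by hours at stake, restricted to low-risk tasks first.
pilot_candidates = sorted(
    (t for t in tasks if t.risk_tier == "low"),
    key=lambda t: t.monthly_hours_at_stake(),
    reverse=True,
)
```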

Measuring success: Brown’s three loops (how to tie pilots to business outcomes)​

Brown’s investment framework is simple and pragmatic:
  • Loop 1: Do existing tasks cheaper/faster/better (productivity metrics).
  • Loop 2: Unlock new value (new clients, faster product launches, broadened services).
  • Loop 3: Reinvent the business when agentic AI is ubiquitous (new operating models).
Measure pilots against the loop they target. If Loop 1 yields “same output, lower cost,” that’s a valid success metric. If Loop 2 enables previously impossible work (e.g., new client insights or new product features), that’s strategic value. HR should ensure KPIs, incentive schemes and headcount models reflect the loop in play, not raw output alone. Examples exist where third‑party risk teams use agents to generate much broader supplier analyses without larger headcount — a Loop 2/3 outcome.
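One lightweight way to keep pilots honest about which loop they are funded against is an explicit KPI mapping. This is an illustrative sketch; the metric names are placeholders, not a prescribed KPI set.

```python
# Illustrative mapping of Brown's three loops to the KPIs a pilot is scored on.
LOOP_KPIS = {
    "loop_1_efficiency": ["hours_saved", "cost_per_transaction", "error_rate"],
    "loop_2_new_value": ["new_clients", "time_to_launch", "services_offered"],
    "loop_3_reinvention": ["operating_model_changes", "revenue_from_ai_native_offerings"],
}

def score_pilot(target_loop: str, observed_metrics: dict) -> dict:
    """Return only the metrics relevant to the loop the pilot was funded against."""
    relevant = LOOP_KPIS[target_loop]
    return {k: v for k, v in observed_metrics.items() if k in relevant}

# A Loop 1 pilot is judged on efficiency, even if it also happened to win a client.
print(score_pilot("loop_1_efficiency", {"hours_saved": 140, "error_rate": 0.02, "new_clients": 1}))
```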

Common blind spots and hard truths​

Brown and other leaders identify several recurring blind spots:
  • Executive teams underestimate how fast models are improving, because procurement/security cycles often delay access by months. Those delays produce knowledge gaps.
  • Organizations often treat AI like a static product (annual upgrades) rather than a continuously improving platform. That mismatch breaks change management and governance.
  • Shadow AI adoption (employees using consumer ChatGPT or standalone agents) is a major risk if enterprise tools are slow, clunky or blocked. Provide secure, high‑quality experiences to prevent shadow use.
  • Overreliance creates deskilling: if managers lean on agents for judgement work without oversight, institutional decision‑making capacity erodes. Maintain learning objectives that keep human skills sharp.

Risk matrix — what could go wrong, and how to mitigate​

  • Data leaks and compliance breaches — enforce DLP, tenancy isolation and contractual model‑use terms.
  • Algorithmic bias in recruitment and performance systems — require fairness testing, impact assessments, and independent audits.
  • Rapid model updates changing behavior — gate model upgrades into staged rollouts with pre‑production testing (a simple promotion gate is sketched below).
  • Cultural backlash and fear — invest in transparent communication, explainability and participatory change design.
These mitigations are operational and non‑negotiable for regulated or high‑stakes HR use cases.
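For the model‑upgrade mitigation above, here is one simple shape such a pre‑production gate could take. The thresholds, metric names and rollout rings are assumptions to be set by your own governance board, not a vendor feature.

```python
# Hypothetical pre-production gate: promote a new model version only if it
# matches or beats the current one on a fixed evaluation suite within tolerance.
def may_promote(current_eval: dict, candidate_eval: dict, max_regression: float = 0.02) -> bool:
    """Allow promotion only when no tracked metric regresses by more than max_regression."""
    for metric, baseline in current_eval.items():
        if candidate_eval.get(metric, 0.0) < baseline - max_regression:
            return False
    return True

current = {"hr_policy_qa_accuracy": 0.91, "pii_leak_rate_inverted": 0.99}
candidate = {"hr_policy_qa_accuracy": 0.93, "pii_leak_rate_inverted": 0.98}

if may_promote(current, candidate):
    print("Stage to pilot ring")   # e.g. a small user cohort first, then widen
else:
    print("Hold upgrade, investigate regressions")
```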

A practical playbook for HR leaders (step‑by‑step)​

  • Convene a cross‑functional AI governance task force (HR, IT, legal, security, employee representation). Document charter and decision rights.
  • Run a rapid task audit (30–60 days): map high‑frequency, low‑risk HR tasks (payroll queries, benefits FAQs, scheduling) suitable for early agent pilots.
  • Choose one secure vendor path and one experiment path: enterprise‑grade sanctioned tooling for production, and a sandboxed experimentation environment for innovation. Institute DLP and logging for both.
  • Protect learning time and create internal credentials: launch a staged AI badge and curriculum for managers and HR teams; make base certifications a prerequisite for adoption.
  • Pilot with clear metrics tied to Brown’s loops: track time saved, error rate, client impact, employee NPS and redeployment rates. Measure before/after using control groups when possible.
  • Gate escalation: require human sign‑off on any agent output that affects pay, hiring, termination or legal status. Build appeal routes and audit trails.
  • Publicly publish an “AI at work” policy that clarifies permitted uses, data handling, and employee rights. Transparency reduces fear and legal risk.

Investment calculus: how to decide where to spend money​

Use the three‑loop lens:
  • Short bets (Loop 1): invest modestly in copilots for high‑frequency admin tasks — expect swift payback if instrumented correctly.
  • Medium bets (Loop 2): fund cross‑functional pilots that create new client value (e.g., people analytics copilots). Expect longer validation timelines and higher governance costs.
  • Large bets (Loop 3): allocate strategic funds to rearchitect operating models only when agents are widely adopted across comparable firms or when competitive parity requires it.
Large capital commitments signal seriousness but don’t guarantee outcomes without operational rigor. Measure ROI not only in cost reduction but in value delivered (new services, customer satisfaction, speed of decision).

What to say to the board — six crisp messages​

  • Agentic AI is not a one‑off project; it’s a platform that will evolve continuously. Expect capability shifts measured in months. (computerweekly.com, livescience.com)
  • We will pursue a phased adoption: prove safety and value on low‑risk tasks before scaling.
  • Reskilling and protected learning time are strategic investments that reduce future severance and redeployment costs.
  • Governance and auditability are cost centers that prevent far larger legal and reputational loss.
  • We will measure success through business outcomes (the three loops), not raw adoption metrics.
  • We will provide sanctioned, secure AI experiences to eliminate the shadow‑IT vector.

What remains uncertain — and what leaders must monitor​

  • The rate of capability improvement remains noisy and benchmark dependent. Treat predictions of exact doubling intervals as planning heuristics, not guarantees. Several reputable studies and executive statements point to a high cadence of improvement, but intervals vary. Prepare for surprise. (arxiv.org, theaidigest.org)
  • Regulatory regimes are evolving rapidly — EEOC guidance, the EU AI Act and national laws will change procurement and audit requirements. Keep legal counsel continuously engaged.
  • Vendor landscapes will shift as hyperscalers (and alternate model providers) extend partnerships; lock‑in risk and supply diversity should be considered when negotiating SLAs. Recent reporting shows hyperscaler partnerships diversifying across model providers, increasing negotiation leverage for buyers. (reuters.com)
Where claims cannot be independently verified — for example, precise productivity multipliers vendors sometimes advertise — treat vendor ROI numbers as hypotheses to be validated by your own pilots and instrumentation.

Final assessment — the balanced verdict​

Agentic AI presents one of the most consequential workforce transitions in the modern era. The upside is clear: routine cognitive load can be shifted to agents, freeing people for higher‑value judgment, creativity and stakeholder work. The risk is also clear: if adoption proceeds without culture change, governance, and measurable reskilling, organizations will either under‑capture value or expose themselves to legal, reputational and operational failures.
Simon Brown’s practical emphasis — measure, protect learning time, role model behavior, and use staged governance — is a sound north star. Leaders should act urgently but deliberately: secure enterprise-grade copilots, run tightly instrumented pilots focused on business outcomes, and rewire HR processes for continuous reskilling and internal mobility.
The era in which AI is a “nice to have” is ending. For HR leaders who build the right culture, governance and learning pathways now, agentic AI will be a tool that lifts people and organizations. For those who don’t, the technology will still arrive — but the organization will be reacting rather than shaping its future.

Practical appendix — quick checklist to get started this quarter​

  • Convene governance task force and publish charter (30 days).
  • Complete task audit and pick 2–3 low‑risk pilot use cases (60 days).
  • Deploy sanctioned Copilot/enterprise agent with DLP and logging; open sandbox for experiments. (microsoft.com)
  • Launch foundational AI badge curriculum for HR and managers; require completion for pilot owners.
  • Instrument KPIs mapped to Loop 1/2/3 and publish a monthly dashboard to the executive team.
This plan converts the strategic urgency Brown outlines into concrete, measurable steps that protect people while unlocking the productivity and innovation benefits of agentic AI.

Source: worklife.news EY's Simon Brown on preparing HR departments for the agentic AI revolution