Executives are telling investors and boards that AI is already embedded in daily operations — but recent research shows a significant portion of the workforce disagrees, and that mismatch is quietly sabotaging the ability of organisations to convert AI enthusiasm into measurable productivity and risk‑managed value.
Background
The conversation about AI at work has moved fast: from speculative articles about job disruption to pragmatic pilots that place generative models inside Word, Excel, and CRM systems. Organisations have rushed to buy licences and announce "Copilot" rollouts, while public statements from leaders have often assumed adoption will follow automatically. That assumption — that technology alone creates usage and value — underpins much of the current executive messaging. Evidence is accumulating, however, that senior leaders are overestimating how often employees actually use AI in daily work and how much autonomy those systems have.

A recent Multiverse study, summarised in coverage across the trade press, is a useful lens on this problem. It found a clear perception gap: 59% of business leaders believe their people collaborate with AI every day, but only 42% of employees report doing so; 23% of executives think staff are delegating whole tasks to AI systems, while only 8% of employees say they are. Those headline numbers point to a mismatch between executive belief and frontline reality.
What the numbers actually say
The most impactful claims from the Multiverse research are short and stark:
- Daily collaboration with AI: 59% of leaders vs. 42% of employees.
- Full task delegation to AI: 23% of executives vs. 8% of employees.
Why does that matter? Because strategy, procurement, and training budgets are being set based on executive assumptions. When those assumptions stray from reality, spending tilts toward technology procurement and headline pilots — not the operational change, measurement, and people development that actually drive adoption.
Where the gap is largest: data, complexity, and seniority
The perception gap is not uniform. Multiverse’s findings — echoed in other industry reports — show the largest mismatches in data‑driven decision making and multi‑step workflow automation. Executives frequently believe AI is being used to synthesize large datasets or to orchestrate complex processes; employees report much lower levels of engagement in those areas. By contrast, routine administrative work (drafting emails, summarising notes) shows a smaller gap — that is, executives are more accurate when it comes to simple use cases.

Adoption also varies strongly by seniority and role. The Multiverse research highlights that mid‑level managers report much higher daily AI collaboration rates than junior staff — producing a roughly 30 percentage‑point gap between seniority levels in some measures. That pattern is reinforced by other surveys showing managers both experiment with AI more and receive more informal mentoring or self‑directed learning than individual contributors. The result is a widening skills and confidence divide inside organisations: managers become AI power users while juniors lack exposure to practical, job‑relevant AI skills.
This uneven diffusion matters because middle managers are the gatekeepers of daily workflows. If managers use AI but their teams don’t, the organisation risks creating islands of productivity rather than systemic uplift.
Leadership readiness — informal learning is not the same as mastery
Multiverse and corroborating industry surveys point to another worrying pattern: structured training is sparse, and many leaders learn AI through informal experimentation rather than through formal, role‑based education. Executives often self‑teach by trial and error — a risky approach when the skills to evaluate model outputs, assess data sensitivity, and translate models into governance policies are required.

That informal learning creates three predictable problems:
- Poor calibration of expectations. Leaders who have dabbled in prompts or who have seen quick wins can project that experience onto the entire organisation, misreading adoption signals.
- Weak change leadership. Without structured training, leaders lack a shared language and repeatable playbooks to help teams adopt AI safely and effectively.
- Inconsistent governance. Ad hoc exploration produces shadow IT and uncontrolled tool choices, increasing data leakage risk. Industry reporting and community discussion about "clipboard to chat" leaks highlight how unsanctioned use of consumer LLMs can expose sensitive information.
The hidden operational costs of overconfidence
When leaders overestimate adoption, organisations pay real costs:
- Misallocated budgets. Buying licences and announcing company‑wide initiatives without ensuring people can use tools produces poor ROI and wasted subscriptions. Recent industry polling suggests many firms have yet to see productivity gains despite heavy investment.
- Policy and governance mismatches. If leaders assume usage is high and centralised while actual use is patchy and shadowed, governance controls will either be misapplied or insufficient, raising compliance, privacy, and IP risks. Reports on shadow AI and data leakage underscore this vulnerability.
- Cultural friction and churn. Employees who lack training can feel the pressure of “do more with AI” mandates without the tools or confidence, increasing stress and reducing trust in leadership’s tech plans. Surveys show many workers feel unprepared for AI changes and worry about fairness and transparency.
Why measuring usage properly matters
Good measurement unlocks three vital capabilities:
- Reality checks for leadership. Accurate telemetry and periodic surveys let executives see where AI is actually used, by whom, and for what tasks. That stops leadership from acting on anecdotes and assumptions.
- Targeted training investment. With role‑level usage data, organisations can prioritise training where it will move the needle most — typically among junior staff doing repeatable work or managers who orchestrate team workflows.
- Risk‑aware governance. Measuring which tools are used and how data flows across them enables proportionate controls: DLP for file uploads, approved model endpoints for sensitive queries, and monitoring for high‑risk patterns. Community reporting has shown that without this visibility, data leakage is a prevalent and persistent risk.
Closing the gap: practical steps that work
Organisations that want to move from hype to impact should consider a structured, measurable approach with a people‑first emphasis. The following roadmap combines policy, training, and measurement into an operational plan.

1. Build an AI usage baseline
- Instrument approved AI tools to capture safe, privacy‑respecting usage metrics.
- Run an anonymised employee survey that asks how often and for what tasks people use AI.
- Cross‑check against procurement and license activation data.
2. Empower managers as adoption multipliers
- Make managers the focal point for applied, job‑specific training.
- Train managers to coach their teams on prompt patterns, model reliability checks, and ethical use.
- Reward managers for demonstrated team‑level AI impact (time saved, error reduction, quality improvements).
3. Shift from one‑off training to learning pathways
- Replace generic "what is AI" workshops with role‑specific learning pathways that map skills to daily workflows.
- Use short, applied modules tied to concrete tasks (e.g., "Use AI to draft and validate a client proposal" or "Use AI to analyze an ops dashboard").
- Include hands‑on labs, internal templates, and real‑task assessments.
4. Govern in proportion to risk
- Classify use cases by data sensitivity and impact.
- Approve model endpoints and define DLP/monitoring for high‑risk categories.
- Make policies clear, short, and task‑oriented (e.g., “Do not upload PII to public playgrounds; use approved model X for redacted data analysis”).
5. Tie adoption to measurable business outcomes
- Define KPIs up front: time saved, response quality, customer NPS improvement, error reductions.
- Run short controlled pilots that measure those KPIs and validate scaling assumptions.
- Use FinOps-style reviews to assess license spend vs. measured gains.
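Step 5's FinOps‑style review can be sketched as simple arithmetic. Every figure below is an illustrative assumption for a hypothetical pilot, not a benchmark:

```python
# Illustrative FinOps-style check: does measured value justify licence spend?
SEATS = 200                         # licences purchased (assumed)
LICENCE_COST_PER_SEAT = 30.0        # monthly cost per seat (assumed)
ACTIVE_SEATS = 120                  # seats with measured usage in the pilot
HOURS_SAVED_PER_ACTIVE_USER = 2.5   # time saved, measured in the controlled pilot
LOADED_HOURLY_RATE = 55.0           # blended internal cost of an hour (assumed)

spend = SEATS * LICENCE_COST_PER_SEAT
value = ACTIVE_SEATS * HOURS_SAVED_PER_ACTIVE_USER * LOADED_HOURLY_RATE
utilisation = ACTIVE_SEATS / SEATS

# A review might flag pilots where value trails spend or utilisation is low.
verdict = "scale" if value > spend and utilisation >= 0.5 else "rework adoption first"
```

The design point is that the verdict depends on measured usage and measured time saved, not on licence counts alone; with the assumed figures above, 40% of seats sit idle even though the pilot clears the value bar.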
Cultural and ethical considerations
Closing the adoption gap is not purely technical; it is cultural. Workers who see AI as an additional expectation without training will resist or use tools in unsafe ways. To mitigate that:
- Communicate transparently about how AI will affect jobs and the support available.
- Involve employees in choosing approved tools and building templates that reflect real workflows.
- Establish clear redress channels for flawed AI outputs (errors that affect customers or internal decisions).
Common objections and how to answer them
- “We already gave people a video; they should learn on the job.”
Response: Ad hoc learning produces patchy results. Short videos can raise awareness but won’t teach the pattern recognition and critical evaluation skills needed for complex decision support. Role‑based, applied learning is more effective.
- “Managers are the bottleneck — we can’t train everyone.”
Response: Train managers to teach — use a train‑the‑trainer model combined with micro‑learning for individuals. Measurement ensures those managers are spreading real, actionable skills.
- “We’ll just ban consumer tools and force everyone to use approved platforms.”
Response: Bans often drive shadow usage. Pairing clear, convenient sanctioned alternatives with monitoring and education reduces risky behaviour and brings usage into governance scope. Community reports of clipboard‑to‑chat leakage highlight the dangers of purely prohibitive approaches.
The role of IT, L&D, and HR — a coordinated playbook
Effective AI adoption requires cross‑functional execution:
- IT / Security: Approve endpoints, deploy DLP, and expose safe, managed models.
- Learning & Development (L&D): Design role‑specific pathways and assessments.
- HR / People Ops: Align performance frameworks and career development to AI skills.
- Business Units: Define workflows and own KPIs for pilots and scale programs.
A cautionary note: what we still don’t know
Survey work is valuable but imperfect. Self‑reporting biases, sample frames, and the rapidly evolving tool landscape can skew numbers. Some reporting suggests that in certain regions or industries usage may be higher or lower than the aggregate indicates. Organisations should treat external studies as helpful signals, not prescriptive mandates. Where claims cannot be verified for a specific context, leaders must test locally before making broad bets.

Conclusion — from confident proclamations to calibrated action
The headline is simple: executives are too optimistic about how widely and deeply AI is used inside their organisations. That optimism isn’t merely benign — it shapes budgets, policies, and expectations in ways that can reduce the chance of success. The antidote is not slower buying or less ambition, but smarter execution: measure usage, invest in role‑specific upskilling, empower managers, and govern proportionately.

Organisations that close the perception gap will capture the real prize of AI: not flashy announcements or purchased licences, but steady, measurable improvement in how work gets done. The path to that prize is mundane and human — clear signals, targeted learning, and relentless measurement — but it is the only reliable route from corporate ambition to durable impact.
Source: Petri IT Knowledgebase Executives Overestimate How Much Employees Use AI at Work