Closing the Confidence–Capability Gap in UK Workplace AI

A fresh survey from digital adoption firm Hable reveals a striking paradox at the heart of UK workplace AI adoption: employees report high confidence using AI, yet organisations are failing to match that confidence with training, formal rollouts or strategic planning. The result is a widening confidence–capability gap that exposes businesses — and public-sector bodies in particular — to governance, productivity and safety risks unless IT leaders act quickly and pragmatically.

Background / Overview

Hable’s snapshot — gathered from 262 UK workers and skewed towards the public sector — found that 71% of respondents said they felt somewhat or very confident using AI tools at work, while only 32% reported receiving any formal AI training or resources to use those tools effectively. Company communication about AI appears more common than practical deployment: 60% had received company-wide communications about AI, yet nearly half said their IT departments had not formally rolled out AI tools. Crucially, only 41% said their organisation had an AI strategy going into 2026, a figure Hable framed as “alarming”. These headline ratios point to a workforce that is eager and experimental, but under-supported by structured learning, governance and strategic direction.
This pattern mirrors other recent UK surveys that show public- and private-sector organisations often feel confident about AI adoption while lacking operational controls and investments in model-level security, training and governance. Those complementary studies warn that self-reported confidence rarely equates to institutional readiness.

Why this matters: the practical stakes for IT leaders

AI adoption is not just another software rollout. Generative and assistant‑style tools change the how and who of knowledge work: they reshape day-to-day workflows, expand attack surfaces, and create new audit and records‑management challenges. When staff experiment with consumer tools or embedded assistants without training, the likely consequences include:
  • Data leakage and regulatory exposure — staff pasting sensitive text or documents into consumer chatbots can lead to unintended disclosure, or to confidential data being used for model training. Enterprise-grade contracts, tenant isolation and DLP controls are necessary to close that gap (see the sketch after this list).
  • Operational errors from hallucinations — well-phrased but incorrect outputs (hallucinations) can cause reputational damage or faulty decisions if human verification is not mandated. Short, awareness-only training that omits verification skills can make this worse.
  • Inconsistent adoption and wasted licences — organisations that issue Copilot‑style licences without role-based enablement frequently see uneven impact and low ROI; training and governance convert licences into measurable gains.
  • Shadow AI and auditability problems — ad‑hoc use of consumer AI services undermines enterprise audit trails and complicates incident response. Inventory and whitelisting are baseline controls.
Put simply: employee confidence is a productive asset, but without strategy, it becomes a liability.
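To make the first and last of those risks concrete, here is a minimal sketch of a pre-submission check that screens prompts against an approved-tools list and a few sensitive-data patterns before anything leaves the tenant. The tool names and regular expressions are illustrative assumptions, not a production DLP engine, which would use far richer detection and enforcement:

```python
import re

# Illustrative allow-list of sanctioned AI endpoints (hypothetical names).
ALLOWED_TOOLS = {"copilot-enterprise", "internal-rag-assistant"}

# Simplified, illustrative patterns for data that should never reach a public
# chat endpoint: UK National Insurance numbers, payment card numbers and
# obvious private-key material.
SENSITIVE_PATTERNS = {
    "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def check_prompt(tool: str, prompt: str) -> list:
    """Return a list of policy violations; an empty list means the prompt may be sent."""
    violations = []
    if tool not in ALLOWED_TOOLS:
        violations.append(f"unapproved tool: {tool}")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"sensitive data detected: {label}")
    return violations

print(check_prompt("consumer-chatbot", "Claimant NI number AB123456C"))
# -> ['unapproved tool: consumer-chatbot', 'sensitive data detected: ni_number']
```

In practice this logic would sit in a proxy, browser extension or API gateway rather than in user-space code, so it cannot simply be bypassed.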

The “confidence–capability gap” explained

Hable’s framing of a confidence–capability gap is useful because it captures two simultaneous trends:
  • Rapid, consumer-led diffusion of AI literacy and tool familiarity — many workers learn tools like ChatGPT or embedded assistants outside formal training, building practical confidence.
  • Slow organisational deployment of structured learning, procurement hardening and governance — meaning capability at scale (consistent, auditable, secure use across functions) lags.
Other UK-focused studies and advisory reports show the same tension: leaders believe they have invested enough in AI, yet far fewer organisations have embedded model-level security, upskilling roadmaps or documented strategies for governance and responsible use. That mismatch raises the probability of incidents and reduces the chance of converting daily time savings into durable productivity improvements.

What the numbers tell us (and what they don’t)

Hable’s survey numbers are headline-grabbing and directionally valuable, but it’s important to interpret them correctly.
  • High user confidence (71%) reflects broad exposure to consumer AI and embedded features in ubiquitous products (for example, copilots inside productivity apps), not necessarily formal training or institutional competency. Confidence ≠ competence.
  • Only 32% reported training or resources — a practical red flag. Without role-based onboarding and hands‑on practice, users may misapply tools or fail to verify outputs.
  • Nearly half reporting no formal IT rollout suggests that much adoption is self-service — local champions or individuals enabling tools ad hoc — rather than a managed enterprise rollout with DLP, contractual safeguards and telemetry.
  • The 41% figure for organisations with an AI strategy shows that a majority still lack documented, cross‑functional plans covering procurement, risk tiers, upskilling and measurement. This aligns with other research showing systemic underspend or misalignment on AI governance.
Where Hable’s survey is strong is in highlighting employee sentiment and the mix of excitement, apprehension and confusion — useful signals for L&D and IT teams designing interventions. Where it is limited (as with many short surveys) is in the ability to prove causation: we don’t yet know whether confidence levels are translating into safer or more productive outputs at scale. For that, instrumented pilots and telemetry are needed.
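As a rough illustration of what that instrumentation could record, the sketch below writes one audit entry per AI interaction. Every field name is a hypothetical choice; storing hashes rather than raw text is one way to stop the log itself becoming a leak vector:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_interaction(user_id: str, tool: str, prompt: str, output: str,
                    verified_by_human: bool) -> dict:
    """Append one audit record per AI interaction to a JSON-lines file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        # Hashes allow later matching against retained content without
        # the telemetry store holding sensitive text itself.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "verified_by_human": verified_by_human,
    }
    # An append-only file stands in for a real telemetry pipeline.
    with open("ai_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```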

Mixed emotions: appetite and anxiety in equal measure

Hable reports employee emotions as roughly balanced between apprehension (45%) and excitement (43%), with a smaller cohort feeling confused (12%). That emotional split is significant:
  • Apprehension correlates strongly with lack of organisational support — among the apprehensive, a large majority reported no training or resources. Addressing emotional barriers is largely a managerial and training problem, not a technology one.
  • Excitement reflects perceived upside — Hable found that respondents most commonly expect AI to make day-to-day work easier (45%) and improve productivity (44%). Yet alongside this, meaningful shares worry about job displacement (27%) or expect new growth opportunities (26%). These contradictory expectations underscore why training must combine productivity skills with career-transition and reskilling pathways.
Other UK studies reinforce the social dimension: training that links to demonstrable artifacts (prompt portfolios, automation demos, role-based projects) increases both confidence and employability, while awareness-only badges risk tokenism.

The governance angle: why IT teams must be proactive

The Hable findings underline a governance imperative for IT and security teams:
  • Build and publish an approved‑tools list and map allowed use cases by risk tier; forbid pasting of personal health information, payment card data and source code into public chat endpoints. Enforce via DLP and conditional access (a minimal policy sketch appears below).
  • Require mandatory, scenario-based training before granting access to enterprise copilots; consider short role-based primers for low-risk users and deeper modules for stewards and model owners. HMRC’s approach — a 90‑minute mandatory Copilot module for users — is a practical example of tying access to training.
  • Capture telemetry and create audit trails for prompts and outputs where outputs influence decisions or external communications. This is essential for accountability, FOI/records compliance, and post-incident forensics.
  • Treat AI as a product that needs lifecycle ownership: inventory all AI endpoints, appoint accountable owners, and measure both value (time saved, task automation) and risk (near misses, hallucination incidents).
These are not theoretical steps — more mature public-sector pilots that have paired tenant‑grounded copilots with training and governance report measurable time savings and safer rollouts. That operating model is reproducible if IT leaders prioritise it.
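As a minimal sketch of the first two controls, assuming invented tool names, tier labels and course identifiers, the snippet below models the approved-tools list, risk tiers and training prerequisites as data, so that access decisions are checkable and auditable rather than ad hoc:

```python
from dataclasses import dataclass, field

# Hypothetical risk tiers: the data classes each tier of tool may receive.
TIER_ALLOWED_DATA = {
    "low":    {"public"},
    "medium": {"public", "internal"},
    "high":   {"public", "internal", "confidential"},
}

# Illustrative approved-tools register: tool -> (risk tier, required training).
APPROVED_TOOLS = {
    "copilot-enterprise":  ("high", "copilot-baseline"),
    "summariser-internal": ("medium", "ai-baseline"),
}

@dataclass
class User:
    name: str
    completed_training: set = field(default_factory=set)

def may_use(user: User, tool: str, data_class: str) -> tuple:
    """Gate access on the register, the data class and completed training."""
    if tool not in APPROVED_TOOLS:
        return False, "tool not on the approved list"
    tier, required_course = APPROVED_TOOLS[tool]
    if data_class not in TIER_ALLOWED_DATA[tier]:
        return False, f"{data_class} data not permitted at tier '{tier}'"
    if required_course not in user.completed_training:
        return False, f"training '{required_course}' not completed"
    return True, "allowed"

user = User("jo", completed_training={"ai-baseline"})
print(may_use(user, "copilot-enterprise", "internal"))
# -> (False, "training 'copilot-baseline' not completed")
```

Tying the training check directly into the access gate mirrors the HMRC pattern described above: the licence is inert until the module is done.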

Practical, step-by-step guidance for closing the gap

  • Run an AI asset discovery sprint (2–4 weeks) to map where assistants and LLMs are used, sanctioned and unsanctioned. Capture connectors, RAG sources and tenant endpoints.
  • Build a short mandatory baseline course for all users (30–90 minutes) that covers: permitted inputs, prompt hygiene, verification steps, and how to escalate suspect outputs. Use a blended model (micro‑modules + applied task).
  • Create role-based follow-ups: Copilot “power user” tracks, administrator security tracks (DLP, connectors), and steward courses for centre-of-excellence (CoE) members. Make advanced modules required for connectors and tenant-level roles.
  • Implement technical controls in parallel: tenant-bound copilots where possible, non-training contractual clauses in procurement, DLP rules for chat endpoints, and logging/retention policies for prompt data.
  • Pilot measurable use-cases (6–12 weeks) with KPIs: time saved per task, error rate of AI outputs, and number of governance incidents. Gate scale-up on meeting outcome thresholds (see the gating sketch below).
  • Publish an enterprise AI policy that covers acceptable use, escalation, privacy, retention and human-in-the-loop sign-off points; ensure HR and legal are involved.
These steps convert ad hoc enthusiasm into structured adoption and close the confidence–capability gap by giving people not just permission to use tools, but the skills and guardrails to use them well.
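The pilot step in particular lends itself to an explicit, mechanical gate. The thresholds below are invented placeholders; the point is that scale-up criteria are agreed in advance and checked automatically rather than argued after the fact:

```python
from dataclasses import dataclass

@dataclass
class PilotResults:
    hours_saved_per_user_week: float  # measured time savings
    output_error_rate: float          # share of AI outputs failing verification
    governance_incidents: int         # policy breaches during the pilot

# Hypothetical thresholds; set these per organisation and per risk tier.
MIN_HOURS_SAVED = 1.0
MAX_ERROR_RATE = 0.05
MAX_INCIDENTS = 0

def gate_scale_up(results: PilotResults) -> tuple:
    """Return (scale_up, reasons) judged against the pre-agreed thresholds."""
    reasons = []
    if results.hours_saved_per_user_week < MIN_HOURS_SAVED:
        reasons.append("time savings below threshold")
    if results.output_error_rate > MAX_ERROR_RATE:
        reasons.append("error rate above threshold")
    if results.governance_incidents > MAX_INCIDENTS:
        reasons.append("governance incidents recorded")
    return (not reasons, reasons)

print(gate_scale_up(PilotResults(1.5, 0.08, 0)))
# -> (False, ['error rate above threshold'])
```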

Training design that works: principles and examples

  • Make training applied and role-specific. Short, case-based modules tied to a concrete task (e.g., draft an email using Copilot, then validate sources) build skills faster than generic theory. CompTIA’s “AI Essentials” and other vendor-neutral packages show the value of scenario-driven learning paired with a short competency check.
  • Use artifacts, not just certificates. Require learners to submit a small portfolio (prompt library, example automation, annotated output) as proof of capability. This raises the value of any badge.
  • Pair learning with tenant sandboxes. Let learners practice with sanitized organisational data in a controlled environment. Sandboxes reduce shadow AI and accelerate meaningful skill transfer.
  • Measure employer recognition. Get HR and hiring managers to map training to role profiles so that completed modules feed into career development or allocation of duties.
Case example: organisations that combined Copilot licences with mandatory training and local champions reported faster, safer adoption and clearer ROI — HMRC’s early rollout is an instructive model because it linked training to licence entitlement and tenant‑grounded data access.

Strategic considerations for boards and CISOs

Boards must stop treating AI as a purely operational or HR problem. The Hable results should be read as a board-level wake-up call: employee readiness is outpacing institutional readiness, which means the board should demand:
  • A short, auditable AI strategy document tied to business outcomes and risk tolerances.
  • Evidence of training coverage by role and measurable artefacts that demonstrate competence.
  • A budget line for AI governance (model security, telemetry, training, independent testing).
  • Regular reporting on AI incidents, near-misses and model-drift metrics (a report-schema sketch follows below).
Without these elements, organisations risk a high-profile misuse or data leak that could have been prevented with modest investments in training, procurement discipline and telemetry.
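One way to make that reporting auditable is a fixed schema the board sees every quarter, so the same measures are reported consistently. The fields below are hypothetical examples, not any standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIGovernanceReport:
    """Hypothetical quarterly board report; all field names are illustrative."""
    quarter: str
    training_coverage_by_role: dict  # share of staff trained, per role
    incidents: int                   # confirmed AI-related incidents
    near_misses: int                 # caught before harm occurred
    hallucination_reports: int       # verified incorrect outputs flagged
    governance_budget_spent_gbp: float

report = AIGovernanceReport(
    quarter="2026-Q1",
    training_coverage_by_role={"analyst": 0.80, "case-worker": 0.55},
    incidents=0,
    near_misses=3,
    hallucination_reports=12,
    governance_budget_spent_gbp=45_000.0,
)
print(json.dumps(asdict(report), indent=2))  # serialise for the board pack
```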

Risks to avoid and red flags to watch

  • Relying on completion-only badges without work products — this dilutes credibility and fails to build real capability.
  • Treating vendor marketing statements as contractual guarantees — insist on non‑training clauses, deletion rights and audit access.
  • Over‑instrumenting surveillance (tracking “how much AI an employee used”) — governance should avoid creating cultures of distrust; measure outcomes, not usage alone.
  • Ignoring the digitally excluded — online-only offers leave gaps for those without devices, bandwidth or basic digital skills; local in‑person hubs and device support are essential for equity.

Conclusion: turn confidence into capability — and fast

Hable’s survey is a useful, timely signal: UK workers are ready and curious about AI, but organisations are not yet ready to harness that energy responsibly and at scale. The solution is not more marketing or permissive encouragement; it is a pragmatic programme that pairs:
  • a concise AI strategy,
  • role‑based, artifact-focused learning pathways,
  • enforceable procurement safeguards, and
  • measured pilots that gate scale on real outcomes.
IT leaders can close the confidence–capability gap in three practical moves this quarter: run a short AI asset inventory, roll out mandatory baseline training for all users tied to access controls, and launch two instrumented pilots (one low-risk productivity use case, one higher-value function) with clear KPIs. Those steps convert isolated enthusiasm into repeatable value while reducing legal, privacy and operational risk — and they ensure that employees’ confidence becomes an organisational asset rather than an avoidable exposure.


Source: IT Brief UK https://itbrief.co.uk/story/uk-staff-confident-with-ai-but-lack-training-strategy/
 
