Seven in ten doctors tell the Royal College of Physicians that the NHS is not digitally fit to deploy AI safely, exposing a dangerous mismatch between clinical enthusiasm and the health service’s broken technical foundations.
Background
The Royal College of Physicians’ new "RCP view on digital and AI" report draws on a June 2025 snapshot survey of its members and follow-up analysis to conclude that clinicians broadly support the potential of artificial intelligence — but overwhelmingly doubt the NHS’s current ability to integrate those tools in ways that improve care safely and equitably. The headline findings are stark: roughly 70% of surveyed physicians are supportive of AI adoption, yet 68% say the NHS lacks the digital infrastructure to introduce AI at scale and 48% strongly disagree that the NHS is ready to integrate the technology. Those numbers are not academic: they reflect the experience and judgement of practising clinicians who see everyday friction caused by incompatible Electronic Patient Records (EPRs), out-of-date infrastructure, and inconsistent local governance. The RCP frames the problem as systemic — not a vendor failure or a single procurement misstep — and calls for urgent, coordinated action: modern infrastructure investment, a standard EPR content model, NHS‑approved algorithm registries, and clinician-led AI design and training.
Why this matters now
AI tools — especially large language models and specialist diagnostic algorithms — promise measurable benefits in triage, imaging, administrative automation, and decision support. Practical pilots have shown time savings and workflow acceleration in narrowly scoped settings, and government-level trials have been used to justify broader investment in AI-assisted productivity. Yet the NHS is a federated system of trusts and clinical systems; without interoperability, auditability, and governance, pilots cannot scale safely. Several independent analyses reinforce the RCP’s diagnosis: integration with EPRs, data governance, and workforce capability are the most persistent blockers to value at scale. This is not an argument against AI. Rather, it is a call to treat AI adoption as a full-spectrum transformation: technical integration, procurement and contractual rigour, clinical safety cases, workforce training, and patient-facing transparency must all be resolved before AI becomes a routine part of practice.
Survey findings: clinician attitudes, behaviours and anxieties
Supportive but sceptical
- 70% of RCP respondents said they are very or somewhat supportive of AI being widely implemented in the NHS. Yet 68% judged the NHS’s digital infrastructure inadequate to deliver AI that will make a real difference for patients — a direct judgment on systems, not on AI itself.
- Only a minority of clinicians use AI tools in clinical practice frequently: about 16% reported daily clinical use, with another 15% weekly. That gap between enthusiasm and practical use is telling: clinicians like the idea of AI but do not yet have reliable, trustable tools in their workflows.
Practical problems clinicians named
- Integration with EPRs — 70% named the inability to integrate AI with Electronic Patient Records and other systems as the biggest barrier. The NHS’s lack of a standardised EPR content model means data and interfaces vary widely between trusts, raising the cost and risk of integrations.
- Workforce skills — 79% said they need training in clinical AI tools; yet 66% reported no access to such training. That training gap leaves clinicians reliant on ad‑hoc learning and consumer AI tools in the absence of sanctioned, validated alternatives.
- Use of personal AI tools — 69% admitted to using non‑NHS AI tools (ChatGPT, Copilot, etc.) to answer clinical questions. The RCP flagged this as a safety and governance risk; unsanctioned tools lack audit trails, data protections, and clinical validation.
Top clinician concerns
- Clinical error risk (73%), liability exposures (54%), de‑skilling (52%), model drift (48%) and bias/explainability issues (around 47–48%) were all major concerns listed in the RCP survey. These are real, proven failure modes for current AI systems and drive clinicians’ reluctance to rely on them without safeguards.
Digital foundations: EPRs, interoperability and the integration problem
The EPR fragmentation problem
The RCP describes a fractured EPR landscape: different vendors, different clinical views, different storage models and inconsistent metadata. That multiplicity makes it expensive, slow and brittle to deploy AI that must read, interpret and write back to patient records safely. In practice, an algorithm that performs well in a single trust often needs repeat validation and heavy integration work before it can function elsewhere.
Why this breaks AI deployments:
- AI needs structured, consistent inputs and clear provenance — inconsistent EPR schemas frustrate retrieval-augmented models and increase hallucination risk.
- Integration work multiplies procurement and technical effort, slowing down time to benefit and raising total cost of ownership.
- Different UXs and data views across trusts increase clinician cognitive load, undermining trust and adoption.
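The per-trust translation problem above can be made concrete with a small sketch. This is a hypothetical illustration, not a real NHS schema: two trusts export the same data under different field names, and each needs its own mapping into a shared content model — exactly the duplicated integration work a mandated standard would remove.

```python
# Hypothetical sketch of the translation layer a mandated EPR content model
# would make unnecessary. All field names and values are invented for
# illustration; they are not real trust schemas.

TRUST_A_MAP = {"nhs_no": "nhs_number", "dob": "date_of_birth", "hb_g_l": "haemoglobin_g_l"}
TRUST_B_MAP = {"NHSNumber": "nhs_number", "DateOfBirth": "date_of_birth", "Hb_gL": "haemoglobin_g_l"}

def to_content_model(record: dict, field_map: dict) -> dict:
    """Rename a trust's local fields into the shared model; unmapped fields are dropped."""
    return {field_map[k]: v for k, v in record.items() if k in field_map}

record_a = to_content_model({"nhs_no": "943 476 5919", "dob": "1960-04-01", "hb_g_l": 132}, TRUST_A_MAP)
record_b = to_content_model({"NHSNumber": "943 476 5919", "DateOfBirth": "1960-04-01", "Hb_gL": 132}, TRUST_B_MAP)
# After normalisation both records are structurally identical, so a single
# algorithm can consume either without per-trust integration work.
```

With a mandated content model, every trust would export `record_a`'s shape directly and the per-site mapping tables would disappear.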
The RCP prescription
The RCP calls for a mandated EPR content model and national standardisation to reduce translation layers between systems. It also recommends a centralised bank of NHS‑approved algorithms and patient apps, so clinicians can choose validated tools that are certified against common standards. These are not novel ideas; they reflect the only practical route to reducing per‑site integration cost and providing consistent, auditable clinical performance.
Governance, regulation and patient safety
Regulation: missing, slow, or unclear
More than a third of clinicians identified the absence of clear regulation as a major obstacle. The regulatory landscape is complex: devices that influence clinical decisions may be regulated by the MHRA as medical devices; conversational assistants that produce clinical text create ambiguous boundaries between admin and clinical outputs. Clinicians and vendors both need legal clarity on when a product becomes a regulated clinical tool and what evidence and post‑market surveillance are required.
Clinical safety and the human‑in‑the‑loop
The RCP stresses that AI must augment, not replace, clinical judgement. That means every clinically impactful AI output needs:
- a documented clinical safety case,
- mandatory human verification before an AI draft enters the legal medical record,
- immutable audit trails recording prompts, model version and the person who verified the output.
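The audit-trail requirement above can be sketched in code. This is an illustrative design under stated assumptions, not an NHS specification: each entry captures the prompt, model version and the verifying clinician, and a hash chain makes retrospective tampering with earlier entries detectable.

```python
# Illustrative sketch of an immutable audit trail for AI outputs. Field names
# and the hash-chain design are assumptions, not a mandated NHS format.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, prompt: str, output: str, model_version: str, verified_by: str) -> dict:
        """Append one verified AI output, chained to the previous entry's hash."""
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "output": output,
            "model_version": model_version,
            "verified_by": verified_by,  # human sign-off before entering the record
            "prev_hash": prev,
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Recompute every hash; any edited entry breaks the chain."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

The design choice worth noting: because each entry embeds the previous entry's hash, an auditor can verify the whole log offline from an export, which is what "exportable telemetry for independent audit" requires in practice.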
Cybersecurity and data protection
Scaling AI increases attack surface and data‑flow complexity. The RCP and independent commentators recommend:
- robust encryption, access control and identity management,
- clear contractual limits on vendor secondary use of NHS data,
- tenant isolation and exportable telemetry for independent audit.
Workforce readiness: training, culture and change management
The training gap
Nearly eight in ten clinicians say they need training in clinical AI tools; two in three say they don’t have access to it. That mismatch is a structural blocker: without role‑specific, clinically focused education and protected time to learn, clinicians will either (a) fall back to consumer tools with unknown safety properties, or (b) underutilise validated systems and leave value on the table.
Practical training and change management checklist
- Define competencies by role (what a registrar needs to know vs an admin clerk).
- Integrate AI into CPD and undergraduate curricula with practical assessments.
- Fund protected learning time and simulation environments.
- Provide on‑call clinical informatics support during initial rollouts.
- Monitor and publish usage metrics and correction rates.
Where AI is already helping — and where over‑confidence is dangerous
Positive, evidence-based wins
- Radiology and pathology: well‑defined image analysis tasks have seen multiple CE‑marked/regulated algorithms provide measurable diagnostic support in pilot sites. These systems succeed because inputs are structured and tasks are narrow and measurable.
- Endoscopy / Gastroenterology: AI-assisted polyp detection in colonoscopy has improved adenoma detection rates in focused trials. The constrained nature of the task and the immediate, verifiable ground truth make this an ideal clinical use-case.
- Administrative automation: embedded assistants in productivity suites can reduce routine drafting and summarisation time in admin teams — pilots report non-trivial time savings when human verification is built into the workflow. These are early, empirical productivity wins that the NHS can scale more rapidly if governance and technical baselines are assured.
Risks where AI can cause harm
- Hallucinations and omissions: LLMs can fabricate plausible statements or omit critical clinical facts; this is dangerous if outputs are automated into records or used to make treatment decisions without vetting.
- Over‑reliance and deskilling: repeated dependence on AI for routine tasks can erode clinician skills and reduce the rigour of clinical review. The RCP highlights de‑skilling as a top concern.
- Bias and inequity: models trained on unrepresentative datasets can perform worse for minority or disadvantaged groups, worsening health inequalities if left untested and unmitigated.
The RCP’s recommendations — and pragmatic priorities for CIOs
The RCP makes a set of clear, actionable asks for government and the NHS; these map closely to operational priorities for technology leaders:
- Invest in modern digital infrastructure and fix the basics (networking, device refresh, reliable Wi‑Fi).
- Mandate an EPR content model so records are standardised and interoperable.
- Create a central registry or bank of NHS‑approved algorithms and patient apps that meet national standards.
- Deliver regulation and guidance for safe AI use in healthcare.
- Involve clinicians and patients from the outset in AI design and deployment.
- Embed AI competencies into training and CPD.
A four-step roadmap CIOs can act on now
- Scope low-risk pilots — prioritise admin workflows and high-volume, clearly bounded tasks where verification overhead is small.
- Enforce human-in-the-loop — require sign-off metadata and immutable logging before outputs are appended to records.
- Instrument outcomes — combine telemetry, time‑and‑motion studies and independent audits to measure net benefits (including verification time).
- Build procurement guardrails — require vendor transparency on data usage, exportable logs, model update notification, and strong SLAs for security and incident response.
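The "instrument outcomes" step has a simple arithmetic core worth making explicit: AI drafting time only counts as a saving after subtracting clinician verification and rework. A minimal sketch, with invented figures for illustration:

```python
# Hypothetical net-benefit calculation for an AI-assisted task. All numbers
# below are illustrative, not measured NHS figures.

def net_minutes_saved(baseline_min: float, draft_min: float,
                      verify_min: float, rework_rate: float,
                      rework_min: float) -> float:
    """Net time saved per task once human-in-the-loop overhead is included."""
    ai_total = draft_min + verify_min + rework_rate * rework_min
    return baseline_min - ai_total

# e.g. a 12-minute discharge summary: AI drafts in 2 min, the clinician
# verifies for 4 min, and 20% of drafts need 5 min of rework.
saving = net_minutes_saved(baseline_min=12, draft_min=2,
                           verify_min=4, rework_rate=0.20, rework_min=5)
```

The point of instrumenting all four inputs, rather than just drafting time, is that a pilot can show a headline "10 minutes saved" while the true net figure, once verification and rework are counted, is half that or negative.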
Technical checklist for successful, safe integration
- Standardised EPR schema and shared terminology.
- API-first interoperability (FHIR or equivalent) with versioned data contracts.
- Tenant‑isolated, auditable inference endpoints with logging of prompts, outputs and model versions.
- Role-based access control, SSO and robust identity management.
- DLP and privacy-preserving telemetry for usage analytics.
- Clinical safety cases and post‑market surveillance for any AI that affects care.
- Regular adversarial testing (red‑teaming) for hallucinations and bias.
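The "versioned data contracts" item in the checklist can be sketched as a gatekeeping check at the inference endpoint. This is a minimal illustration, assuming invented field names and version strings; the only real identifier used is the LOINC code "718-7" (haemoglobin), included purely as sample data.

```python
# Minimal sketch of a versioned data contract check: the endpoint rejects
# payloads whose contract version or required fields don't match what the
# deployed model was validated against. Contract contents are assumptions.

CONTRACT = {
    "version": "1.2",
    "required": {"nhs_number", "observation_code", "value", "unit"},
}

def validate_payload(payload: dict) -> list[str]:
    """Return a list of contract violations; an empty list means accepted."""
    errors = []
    if payload.get("contract_version") != CONTRACT["version"]:
        errors.append(f"contract version mismatch: {payload.get('contract_version')}")
    missing = CONTRACT["required"] - payload.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    return errors

ok = validate_payload({"contract_version": "1.2", "nhs_number": "943 476 5919",
                      "observation_code": "718-7", "value": 132, "unit": "g/L"})
bad = validate_payload({"contract_version": "1.1", "value": 132})
```

Versioning the contract matters because a model validated against schema 1.2 may silently misbehave on 1.1 data; rejecting the payload outright is safer than coercing it.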
Legal, ethical and public trust implications
Deploying AI in the NHS is as much a political and ethical challenge as it is a technical one. The public expects patient records to remain confidential and clinicians to be fully accountable for care decisions. Any perception that the NHS is using unvetted commercial models — or that patient data are being recycled into private model training without clear safeguards — will rapidly erode public trust.
Legal accountability must be clarified: clinicians will need clear rules about liability when AI outputs contribute to decisions, and procurement must ensure vendors accept contractual obligations around auditability and data protection. Clear, published incident reporting and transparent independent audits will be critical to maintaining legitimacy.
What to watch for: red flags that mean a rollout is premature
- Deployment without a clinical safety case or without mandatory clinician verification.
- Use of consumer AI endpoints for patient-identifiable content.
- Vendor contracts that allow undisclosed secondary use of NHS data for model training.
- Lack of exportable logs and telemetry for independent audit.
- No plan to measure net time savings including verification and rework.
Conclusion — a practical, urgent programme to make the NHS AI-ready
The RCP’s verdict is unambiguous: clinicians support AI in principle but do not trust the NHS’s current digital foundations to deliver it safely. That judgment matters because the NHS cannot safely or equitably reap AI’s benefits while its EPRs, procurement, training and governance remain fragmented.
The path forward is clear and practical: fix the digital basics (EPR standardisation, reliable infrastructure), mandate clinical safety and human-in-the-loop processes, create an NHS‑approved registry of validated tools, and invest seriously in clinician training and independent evaluation. When these elements are in place — not before — AI can move from pilot promise to scale‑safe benefit, reducing administrative burden, speeding diagnoses in well‑defined tasks, and improving patient outcomes.
Policymakers and NHS leaders now face a choice: move quickly to shore up the foundations the RCP has flagged, or risk large-scale AI deployments that are expensive, fragile and potentially unsafe. The clinician community’s message is clear: the technology is promising, but the NHS’s digital foundations are broken — and they must be repaired before AI can deliver on its promise.
Source: UKAuthority NHS not ready for AI - Royal College of Physicians | UKAuthority