Barristers in England and Wales have been issued refreshed guidance on the use of ChatGPT and other large language model (LLM) tools after a series of High Court rulings exposed lawyers putting non‑existent authorities before the court, and regulators and judges signalled that professional responsibility does not disappear when using generative AI.
Background
The Bar Council’s updated guidance, published in late November 2025, responds to a rapid rise in reported incidents where legal filings — drafted or assisted by generative AI tools — contained fabricated case citations or mischaracterised authorities. The guidance urges barristers to understand how systems such as Google’s Gemini, Perplexity, Harvey and Microsoft Copilot operate, and to keep meaningful human oversight over any outputs used in pleadings, advice or oral submissions. It stresses that barristers retain ultimate responsibility for accuracy, confidentiality and compliance with the profession’s rules.
The immediate spark for the refreshed document was a set of High Court decisions in 2025 that exposed five entirely fictitious cases being cited in a judicial review pleading and identified dozens of other unreliable authorities presented to courts in related matters. Mr Justice Ritchie and later the Divisional Court described the conduct in the underlying proceedings as constituting serious professional failings, ordered wasted‑costs payments and referred the lawyers involved to their regulators. The court expressly warned that readily available generative AI tools are “not capable of conducting reliable legal research” without rigorous verification.
This is not an England‑only problem. Courts and tribunals in other jurisdictions, and even prosecutors and major law firms, have reported instances where generative AI produced plausible‑looking but non‑existent citations or mischaracterised case law — prompting fines, publicity and internal policy changes. In the United States, federal judges and state courts have confronted briefs containing AI‑invented precedents; a prominent U.S. law firm recently avoided sanctions after acknowledging that an associate had relied on AI that generated false citations.
What the Bar Council guidance says — the essentials
The Bar Council’s resource is practical and risk‑focused rather than technocratic. It does not ban the use of LLMs; it sets out guardrails that place the burden of verification squarely on the human practitioner.
Key elements of the guidance include:
- A clear statement of responsibility: barristers remain responsible for the accuracy, confidentiality and ethical compliance of any material they put before the court, regardless of whether an AI tool assisted in producing it.
- Named systems and risk categories: the guidance explicitly mentions Gemini, Perplexity, Harvey and Microsoft Copilot and warns about core LLM risks — hallucination, anthropomorphism (treating the model as a sentient adviser), information disorder, training‑data bias, and cybersecurity vulnerabilities.
- Confidentiality and data protection: barristers are told to preserve client confidentiality and be mindful that some public chatbots may add user inputs to provider training data unless contractual protections exist.
- Reference to academic reliability assessments: the guidance notes recent academic research questioning the reliability of AI legal research tools and directs practitioners to established, authoritative legal resources such as the Inns of Court libraries.
- Non‑binding status but regulatory horizon: the guidance is framed as professional advice (not formal disciplinary rules) but is explicitly linked to the Bar Standards Board’s joint working group and the prospect of further training and supervisory measures.
The High Court wake‑up calls: what happened and what courts said
Two strands of High Court commentary crystallised the profession’s unease in 2025.
- The Ayinde litigation: in judicial review proceedings concerning a homelessness claimant, a junior barrister’s pleading included five authorities that did not exist; the judge ordered wasted costs, severely criticised the conduct, and sent the transcript to professional regulators. The judgment condemned the submission of fabricated authorities as “appalling professional misbehaviour” and made clear that mere reliance on an external tool would not excuse a failure to verify. The court did not make an explicit factual finding that AI generated the citations, but considered that deliberate fabrication and unverified use of generative AI were both possibilities meriting regulatory scrutiny.
- Parallel examples and referrals: related judgments recorded larger patterns of unreliable authorities being submitted across cases — in one reported matter, dozens of listed authorities were later found to be either fictitious or irrelevant. The Divisional Court listed hearings to consider whether further steps, including contempt proceedings, were necessary. Judges emphasised that the administration of justice depends on the court being able to rely “without question” on the integrity of those who appear before it.
Why LLMs hallucinate and what that means for legal work
LLMs generate outputs by predicting likely continuations of text based on patterns in training data; they are not retrieval engines that guarantee traceable provenance to primary sources. The technical consequence is hallucination: fluent, authoritative‑sounding statements that are not grounded in any actual authority. Empirical work and legal‑sector studies show hallucination rates can be substantial in targeted legal queries, and researchers have documented numerous real‑world incidents where fabricated cases or misattributed holdings were produced by commercial chatbots.
The practical implications for courtroom work are stark (a verification sketch follows this list):
- A model may invent a plausible case name, court citation and summary, all of which will look credible to a human reader unless checked.
- Even when the model does point to a real case, its summary or the proposition it ascribes to that case may be inaccurate or out of context.
- Public chatbots may store or use inputs in ways that risk client confidentiality unless specific contractual controls apply.
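Because the model’s fluency carries no provenance signal, every citation must be resolved against an authoritative source before it is relied upon. The sketch below illustrates that discipline in Python; the lookup_citation resolver and the citation pattern are hypothetical assumptions for illustration, not anything drawn from the Bar Council guidance.

```python
import re

# Rough pattern for neutral citations such as [2025] EWHC 1234 (KB).
# Illustrative only; real citation formats are considerably more varied.
NEUTRAL_CITATION = re.compile(r"\[\d{4}\]\s+\w+\s+\d+(?:\s+\([A-Za-z]+\))?")

def lookup_citation(citation: str) -> bool:
    """Hypothetical resolver: returns True only if the citation exists in an
    authoritative database. A real implementation would query a licensed
    service or the official law reports; here it is stubbed as empty."""
    known_authorities: set[str] = set()  # populated from a trusted source in practice
    return citation in known_authorities

def unverified_citations(llm_output: str) -> list[str]:
    """Return every citation in the model's output that could not be verified.
    Anything in this list must not be put before the court."""
    return [
        match.group(0)
        for match in NEUTRAL_CITATION.finditer(llm_output)
        if not lookup_citation(match.group(0))
    ]

if __name__ == "__main__":
    draft = "The point is settled by [2025] EWHC 9999 (KB) at [42]."
    print(unverified_citations(draft))  # flags the citation until a human verifies it
```

The point of the sketch is that verification happens outside the model: the output text itself can never confirm that an authority exists.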
Global pattern: not just an English problem
The problem of AI‑generated fake citations and misleading submissions has been recorded in multiple jurisdictions. In the United States, investigative reporting and court filings documented dozens of instances where briefs contained non‑existent precedents; one federal judge in Oregon declined to impose sanctions after a large U.S. firm remedied an AI‑generated error, while other judges have issued fines and rebukes. A U.S. prosecutor’s office recently withdrew a motion after discovering inaccurate legal references that the office attributed to AI assistance, prompting internal training and an AI policy. These events demonstrate a shared global challenge: courts are confronting outputs that appear authoritative but lack reliable provenance.
Practical implications for barristers and chambers
The Bar Council guidance is actionable and aligns with emerging best practices. The following is a condensed operational checklist for chambers, practice groups and individual barristers that synthesises the guidance and judicial expectations (a logging sketch follows the checklist).
- Maintain an audit trail: record when and how any AI tool was used, including prompts, outputs, and the person who reviewed them.
- Verify every authority: any case citation, statute or statutory instrument found via an LLM must be independently checked against primary legal databases (e.g., official law reports, national legislation repositories).
- Avoid pasting confidential client material into public chatbots: restrict client‑sensitive inputs to enterprise tools with clear contractual data‑use terms or to locally hosted systems under the chamber’s control.
- Train and supervise junior advocates: ensure pupils and junior barristers understand the limits of LLMs and cannot submit documents without senior verification.
- Use LLMs for desk‑tasks, not final authority: prefer LLM use for summarisation, note‑taking, idea generation and redrafting, but not as a substitute for legal research or authority collection.
- Insert “AI use” notes into case files or disclosure where appropriate and ensure client consent where necessary.
- Adopt vendor‑due diligence: when procuring AI, require provenance guarantees, explainability features, and contractual safeguards for IP and data protection.
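To make the audit‑trail and sign‑off points concrete, here is a minimal sketch of structured logging for AI use. The field names and the JSON Lines file are illustrative assumptions, not a format prescribed by the Bar Council, and any prompt recorded this way should already be stripped of confidential client material.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_use_log.jsonl")  # illustrative location; chambers would choose their own

def log_ai_use(matter_ref: str, tool: str, prompt: str, output_summary: str,
               reviewed_by: str, authorities_verified: bool) -> None:
    """Append one structured record of AI use to a JSON Lines audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_ref": matter_ref,
        "tool": tool,
        "prompt": prompt,                    # sanitised before logging
        "output_summary": output_summary,
        "reviewed_by": reviewed_by,          # senior sign-off before filing
        "authorities_verified": authorities_verified,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    # Example: a junior records chatbot use for a first-draft chronology,
    # with senior review and independent verification of any authorities.
    log_ai_use(
        matter_ref="JR/2025/001",
        tool="enterprise LLM (illustrative)",
        prompt="Summarise the procedural history in neutral terms.",
        output_summary="Draft chronology; no authorities cited.",
        reviewed_by="Senior counsel",
        authorities_verified=True,
    )
```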
Ethical and regulatory consequences
Judicial rulings in 2025 make clear that the legal profession’s ethical framework — duties of candour to the court, competence, and confidentiality — was drafted in a pre‑AI world but remains fully applicable. A lawyer who files a court document containing fabricated material faces several possible outcomes:
- Civil consequences: wasted costs orders or fines for conduct that has caused an opposing party unnecessary expense.
- Regulatory consequences: referral to professional regulators (Bar Standards Board, Solicitors Regulation Authority) and potential disciplinary proceedings.
- Criminal risk (edge case): where fabrication or a false statement risks interfering with the administration of justice, courts have kept contempt powers under consideration.
Technical mitigation and vendor responsibilities
The legal sector’s long‑term safety depends on improvements in model transparency, provenance and retrieval‑augmented approaches. Technical avenues that reduce risk include the following (a grounding‑and‑abstention sketch follows the list):
- Citation and provenance features: systems that return verifiable links to primary sources, including exact paragraph identifiers and persistent identifiers, make automated outputs testable.
- Retrieval‑augmented generation (RAG): coupling a model’s generative ability with controlled, authoritative databases reduces hallucination by forcing models to ground outputs in retrieved documents.
- Abstention mechanisms: models designed to decline to answer when data is insufficient or when prompts require legal judgement beyond the model’s scope.
- Multi‑agent verification: architectures that use separate agents to retrieve, verify and cross‑check claims before producing a final answer. Recent academic work demonstrates that reflective, multi‑agent frameworks can materially reduce hallucination and improve abstention on legal queries.
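The sketch below illustrates the retrieval‑grounding and abstention ideas in outline. The retriever, the in‑memory passage store and the placeholder generation step are all assumptions for illustration; a production system would sit on a licensed legal database and an enterprise model, and nothing here describes any named vendor’s product.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    citation: str   # e.g. a neutral citation from an authoritative source
    text: str       # the passage actually retrieved

def retrieve(query: str, database: list[Passage], top_k: int = 3) -> list[Passage]:
    """Toy retriever: rank passages by naive keyword overlap with the query.
    A real system would use a proper search index over primary sources."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(p.text.lower().split())), p) for p in database]
    scored = [item for item in scored if item[0] > 0]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [passage for _, passage in scored[:top_k]]

def answer_with_grounding(query: str, database: list[Passage]) -> str:
    """Answer only from retrieved passages; abstain when nothing is retrieved.
    The 'generation' step is a placeholder for a constrained model call."""
    passages = retrieve(query, database)
    if not passages:
        return "No supporting authority retrieved; declining to answer."
    sources = "; ".join(p.citation for p in passages)
    return f"Answer drafted solely from retrieved passages (sources: {sources})."

if __name__ == "__main__":
    # With an empty database the system abstains rather than inventing a case.
    print(answer_with_grounding("duty to verify authorities", database=[]))
```

The essential design choice is that the model is never asked to produce an authority from memory: it either grounds its answer in a retrieved passage with a checkable citation, or it declines.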
Critical analysis: strengths and limitations of the Bar Council’s approach
The Bar Council’s updated guidance is a necessary and timely intervention. Its strengths include:
- Practical focus: it recognises the inevitability of AI adoption and provides a realistic, risk‑based framework rather than blanket prohibition.
- Emphasis on professional duty: by stressing human responsibility, it aligns with judicial expectations and sets clear ethical benchmarks.
- Cross‑sector awareness: the guidance references both academic reliability work and existing court directions, situating barristers within a broader ecosystem of AI governance.
Its limitations are equally clear:
- Non‑binding character: as a piece of professional advice (not a regulatory rule), compliance depends on voluntary adoption. Without mandatory reporting or standardised verification tools, uneven implementation across chambers is likely.
- Verification burden but few tools: the guidance imposes a heavy verification and audit burden on practitioners, yet does not provide a standard, affordable toolkit for smaller chambers to reliably validate outputs. This gap leaves sole practitioners and small chambers exposed.
- Attribution ambiguity: courts have so far declined to make definitive findings that AI actually caused the fabrication in every case; this legal ambiguity complicates how regulators attribute blame between human and machine. Until technical forensics and vendor transparency improve, attribution will remain contested.
Recommendations for chambers, regulators and vendors
- Chambers and practice groups should adopt written AI policies that require: (a) mandatory verification of authorities, (b) logging of AI use, (c) senior sign‑off on any court filing, and (d) pupil and junior training modules.
- Regulators should consider introducing a minimum AI‑use declaration for court filings where an LLM materially contributes to research or drafting, to create transparency and allow courts to make informed decisions. This could be a short note in the court bundle rather than a punitive disclosure.
- Vendors should implement provable provenance, RAG integration with primary legal databases and contractual controls to prevent public‑facing chat sessions from entering training corpora. Open auditing standards for legal LLM outputs would help the market.
- The profession should invest in shared, low‑cost verification tooling: a federated, subscription‑based “AI safety net” that cross‑checks citations against major law reports and flags discrepancies. Such shared infrastructure would reduce the verification burden on smaller practices.
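One way such shared infrastructure could work is sketched below. The per‑source resolvers are hypothetical stand‑ins for connectors to the major report series; no such service currently exists, and the design is illustrative only: any citation recognised by no source, or disputed between sources, is flagged for human review.

```python
from typing import Callable

# Hypothetical per-source resolver: returns True if that source recognises
# the citation. Real connectors would query licensed or official databases.
Source = Callable[[str], bool]

def cross_check(citations: list[str], sources: dict[str, Source]) -> dict[str, list[str]]:
    """For each citation, return the list of sources that recognise it.
    An empty list means no source knows the authority, so it must not be
    relied on until a human has investigated."""
    return {
        citation: [name for name, resolver in sources.items() if resolver(citation)]
        for citation in citations
    }

if __name__ == "__main__":
    # Stub resolvers standing in for connectors to real report series.
    sources: dict[str, Source] = {
        "official_reports": lambda c: False,
        "neutral_citation_index": lambda c: False,
    }
    report = cross_check(["[2025] EWHC 9999 (KB)"], sources)
    for citation, recognised_by in report.items():
        if not recognised_by:
            print(f"FLAG: {citation} not found in any configured source")
```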
What to watch next
- Whether regulators make parts of the guidance mandatory or introduce reporting obligations for AI‑assisted submissions.
- Vendor responses: improved provenance, enterprise‑grade legal LLMs and contractual changes that protect client data.
- Judicial practice: whether courts will require formal explanations when contested filings involve AI‑assisted research, or whether the current mix of wasted‑costs orders and referrals remains the norm.
Conclusion
The Bar Council’s updated guidance marks a pragmatic, profession‑led attempt to square the undeniable productivity benefits of LLMs with the immutable duties of advocacy: competence, candour and confidentiality. The document places the verification burden where courts already expect it to lie — with the human advocate — while acknowledging that technology is changing the mechanics of legal work.
Yet guidance alone will not eliminate the risk of fabricated authorities or other hallucination‑driven harms. The profession needs coordinated investment in verification tools, clearer regulatory rules on disclosure and stronger vendor commitments to provenance and data governance. Until then, the safest course for barristers is straightforward: treat LLM outputs as drafts or research aids, not as finished authorities; verify relentlessly; document decisions; and never let the convenience of a chat window substitute for the time‑honoured duty to check the law.
Source: Legal Cheek, “Barristers given fresh AI guidance amid rise in fake cases cited in court”