Bath and North East Somerset Council used Microsoft Copilot to read and summarise more than 5,500 public comments submitted on the planning application for an 18,000‑seat stadium at the Recreation Ground — a development that has reignited debates about AI in local government, public consultation integrity, heritage protection and how councils process mass responses in major planning decisions. (bathvoice.co.uk)

Background / Overview

Bath Rugby’s proposals for a permanent, year‑round 18,000‑seat stadium on the Recreation Ground (the Rec) have been through several iterations over the last three years and remain politically and legally sensitive because the Rec sits in the heart of Bath’s UNESCO World Heritage setting. The club’s resubmitted plans reduce some heights, redesign facades and aim to regenerate the riverside while increasing match‑day and non‑match‑day activity. National and local statutory consultees, campaign groups and thousands of citizens have responded during the consultation periods. (bbc.com)
According to recent reporting, B&NES received 5,590 representations during the latest consultation on the revised stadium plan, of which a substantial majority registered support. The council’s planning officer produced a 121‑page report for the planning committee and — as the planning officer’s report itself states — used Microsoft Copilot to review and summarise representations submitted via the council’s online comment form; the officer then reviewed samples and edited the AI’s output as part of the case review. This procedural detail, explicitly citing Microsoft Copilot, is the single most consequential AI‑related claim in the public record for this application. (bathvoice.co.uk)

Why this matters: public consultation, scale and scrutiny

Public consultations are central to planning democracy: they allow local residents, stakeholders and statutory consultees to flag material planning considerations — such as heritage impacts, highways, biodiversity, flood risk and noise — for decision‑makers to evaluate. When thousands of comments arrive, councils must summarise representations in a way that fairly captures the range, weight and substance of objections and support.
  • Large volumes of input create capacity challenges for planning teams that are already stretched.
  • Summary reports influence councillors, statutory consultees and the public narrative about what the community feels — and therefore shape the perceived legitimacy of decisions.
  • How summaries are produced matters: accuracy, neutrality and traceability are core principles of fair public engagement.
In this case, the council’s decision to use an AI summarisation tool for the bulk of submissions sits at the intersection of operational necessity and democratic transparency. It raises immediate questions about whether an AI‑assisted approach preserves fidelity to the original comments and whether the process was documented and auditable for councillors, journalists and campaigners. (bathecho.co.uk)

What the council and the planning report reportedly did

The planning officer’s report — which was circulated to councillors ahead of the committee meeting — reportedly states that Copilot reviewed all comments submitted through the online form and produced topics and reasons for objection/support. The officer then refined those AI‑generated topics after spot‑checks of a sample of comments. Representations sent directly to the case officer (outside the online form) were also read individually and the text was summarised using Copilot. The full set of individual comments remains available on the council’s public planning portal for anyone who wants to read the original submissions. (bathvoice.co.uk)
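The hybrid workflow the report describes (an AI first pass, followed by officer spot‑checks of a sample) can be sketched in a few lines. This is an illustrative outline only: the sample size, the fixed seed and the shape of the data are assumptions, and nothing here reflects the council's actual Copilot integration.

```python
import random

def spot_check(comments, ai_labels, sample_size=50, seed=42):
    """Pair a random sample of comments with their AI-assigned labels
    so a named officer can confirm or correct them by hand."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    picks = rng.sample(range(len(comments)), min(sample_size, len(comments)))
    return [(comments[i], ai_labels[i]) for i in sorted(picks)]

# Hypothetical data mirroring the reported totals (~5,590 representations).
comments = [f"comment {n}" for n in range(5590)]
ai_labels = ["support"] * 5086 + ["object"] * 368 + ["other"] * 136
sample = spot_check(comments, ai_labels, sample_size=10)
print(len(sample))  # 10 pairs queued for manual review
```

Corrections from such a sample would then feed back into the published summary, which is the spirit of what the officer's report describes: samples reviewed, AI output edited.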
The headline numbers reported in council documents and local reporting are:
  • Total representations: ~5,590
  • In support: ~5,086
  • Objecting: ~368
  • Neither/categorised differently: ~136
It’s important to stress that the numerical result does not determine the planning outcome: planning committee members must weigh material planning considerations — i.e., the planning merits or harms — rather than simply counting supporters and objectors. Nevertheless, these totals and the way they are summarised frame the committee’s perception of public opinion and therefore matter. (bathecho.co.uk)

Verifying the claim: what is corroborated, and what remains uncertain

What is corroborated
  • Multiple independent outlets confirm the scale of public engagement and that a planning committee meeting was scheduled to consider the application. The presence of objections from high‑profile local figures and conservation bodies (including ICOMOS‑UK raising heritage concerns) has been independently reported. (bbc.com)
  • Local reporting (including the article summarising the planning officer’s report) records that the officer’s written report referenced Microsoft Copilot as the tool used to identify reasons for objection/support. That text appears to be a direct quotation or paraphrase of the planning report in circulation. (bathvoice.co.uk)
What remains partially or wholly unverifiable in publicly available sources
  • The council’s own planning portal and the documents hosted there are the primary evidence for the claim that Copilot was used. At the time of writing, mainstream outlets base the Copilot claim on the planning officer’s wording, but the full 121‑page officer’s report sat behind a JavaScript‑rendered council portal and was not trivially accessible to automated search, making independent extraction of the exact report text difficult without downloading the planning file directly. Because the planning file itself is the definitive source, readers and councillors should consult the full committee report on the council’s planning portal to verify the precise wording and context of the Copilot reference. (bathnes.gov.uk)
Cautionary note: until the committee report is manually inspected (or a direct download of the uploaded report is retrieved), the strongest, verifiable public claim about Copilot usage rests on the journalist’s report of the planning officer’s document. That is a high‑quality local report, but due diligence requires consulting the council’s published committee papers for complete confirmation. (bathvoice.co.uk)

Strengths — operational, pragmatic benefits of AI in this context

  • Efficient handling of high volumes of text
    • Automated summarisation can substantially reduce the time planners spend triaging thousands of repetitive or short comments, freeing officers to focus on material planning issues and complex technical responses.
    • For a single application generating thousands of entries, a Copilot‑style assistant can quickly surface common themes (e.g., heritage/UNESCO concerns, transport/parking, flood risk, noise, public access) so that officers can prioritise detailed, human analysis where it matters.
  • Consistent topic extraction and initial coding
    • AI can apply consistent tagging across a dataset, reducing variance introduced by multiple humans applying different labels or interpretations.
    • Tools can generate a first pass which human officers can then refine, producing a hybrid workflow (machine first, human validation second) that balances speed and oversight.
  • Improved traceability if implemented correctly
    • When integrated with proper logging, an AI workflow can produce an auditable trail: which comments were processed, what prompts were used, the AI’s raw outputs, and the human edits applied. That can be valuable for transparency and FOI/appeal responses — but only if the council stores and publishes those artefacts or makes them available on request.
  • Resource relief for underfunded planning teams
    • Many councils operate with reduced staffing; AI assistance promises to tackle high‑volume, repetitive tasks so scarce human time is devoted to specialist evaluation, site visits and cross‑departmental coordination.
These advantages align with how other UK councils have trialled and deployed Microsoft Copilot across back‑office and customer‑facing functions, reporting time savings on drafting, minutes and case summaries. The technology’s mainstreaming in local government makes its use unsurprising, given pressures on capacity. (microsoft.com)
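To make the "surface common themes" step concrete, here is a deliberately simple, rule‑based sketch of theme tagging. A real Copilot‑style summariser works very differently (generative, not keyword matching); the theme names and keywords below are assumptions chosen for illustration, not the themes in the officer's report.

```python
from collections import Counter

# Illustrative keyword -> theme map (assumed, not the council's taxonomy).
THEMES = {
    "heritage": ["unesco", "heritage", "listed"],
    "transport": ["parking", "traffic", "congestion", "bus"],
    "flooding": ["flood", "river", "drainage"],
    "noise": ["noise", "disturbance"],
}

def tag_themes(comment: str) -> set:
    """Return every theme whose keywords appear in the comment."""
    text = comment.lower()
    return {theme for theme, words in THEMES.items()
            if any(word in text for word in words)}

def theme_frequencies(comments: list) -> Counter:
    """Count how many comments touch each theme."""
    counts = Counter()
    for comment in comments:
        counts.update(tag_themes(comment))
    return counts

examples = [
    "The UNESCO World Heritage setting will be harmed.",
    "Match-day parking and traffic are already bad.",
    "Flood risk on the river bank needs assessment.",
    "Parking is my main worry.",
]
print(theme_frequencies(examples).most_common())
# transport appears most often in this toy sample
```

The point of the sketch is the shape of the output: a ranked list of themes that tells officers where to spend detailed human attention first.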

Risks, limitations, and democratic concerns

  • Hallucination and mis‑summarisation risk
    • Generative AI systems can produce plausible but incorrect text, misattribute themes, or omit minority but materially important points. Summaries that miss or misrepresent a critical planning concern (e.g., a protected species record, or a technical flooding comment with site‑specific evidence) could mislead councillors and distort the planning balance.
  • Loss of nuance and minority voices
    • A single, recurring phrase in many short comments can dominate AI topic extraction and make a rare but technically important submission appear marginal. Minority voices that raise complex or technical planning issues may be under‑weighted if the AI’s heuristics default to frequency over substance.
  • Transparency and auditability shortfall
    • Councils must ensure the chain of custody for public comments is auditable: which raw comments produced what AI‑labelled themes, how a sample review confirmed or contradicted the AI, and what edits the human officer made. Without publication of prompts, AI outputs, or an accessible audit trail, trust is undermined.
  • Privacy and data protection concerns
    • Public comments often include names and addresses; feeding text into any AI system raises questions about data retention, telemetry, and whether the service provider may log or use those inputs for model training. Councils must confirm contractual and technical safeguards — e.g., on‑tenant private instances, no‑training clauses, ephemeral compute and strict access logs — to meet data protection obligations.
  • Procedural fairness and public trust
    • The public expects decision‑making to be transparent. If significant summarisation work is outsourced to AI without clear explanation, opponents may claim the process is opaque or mechanistic, weakening the legitimacy of the committee’s decision in a high‑stakes, emotive local dispute.
  • Vendor, configuration and procurement risks
    • Even when using major vendors, the specifics matter: which Copilot mode, which model, where is it hosted, is on‑device processing used, are there retention policies, and do the council’s contracts forbid downstream training on council data? These are procurement and governance questions that must be answered publicly. Evidence from other councils shows good practice — e.g., DPIAs, governance boards, pilot phases and user training — but practice varies. (microsoft.com)
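One practical mitigation for the frequency‑over‑substance risk above is to route statutory or technical submissions out of the bulk AI summarisation pool before any counting happens, so they always receive full human review. The organisation names and marker phrases below are illustrative assumptions, not the council's actual triage rules.

```python
# Illustrative triage: statutory/technical material bypasses bulk AI summary.
STATUTORY_CONSULTEES = {"Environment Agency", "Historic England", "ICOMOS-UK"}
TECHNICAL_MARKERS = ("flood model", "species record", "noise survey")

def triage(comment: str, author_org=None) -> str:
    """Route a representation: statutory and technical submissions get
    full human review regardless of how often their theme recurs."""
    if author_org in STATUTORY_CONSULTEES:
        return "human_review"
    if any(marker in comment.lower() for marker in TECHNICAL_MARKERS):
        return "human_review"
    return "ai_summary_pool"

print(triage("Great for the city and the club!"))            # ai_summary_pool
print(triage("Our flood model shows a 1-in-100-year risk"))  # human_review
print(triage("We object", author_org="ICOMOS-UK"))           # human_review
```

The design choice is that significance is decided by the nature of a submission, not by how many near‑identical comments happen to share its theme.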

What good practice would look like (a practical checklist for councils)

  • Human‑in‑the‑loop as a rule: every AI summary used in a committee paper should be reviewed, annotated and signed off by a named planning officer, with an explicit statement in the paper of what the AI did and how the human checks were performed.
  • Publish the audit trail: make available, ideally via the planning portal or as an appendix to the committee report:
    • the prompts or instructions given to the AI,
    • the AI’s raw summarisation output,
    • the human‑edited final summary,
    • an explanation of the sampling and review methodology.
  • Data protection assurances:
    • publish a DPIA for the use of AI on public comments,
    • describe contractual protections (no‑training clauses, data deletion policies),
    • state retention periods and access controls.
  • Calibrate weighting rules:
    • ensure that frequency of comments is not the sole driver of significance;
    • highlight and separately flag technical submissions from statutory consultees, experts, or those that disclose new material facts.
  • Enable appealability and verification:
    • link each summary heading back to representative original comments in the portal;
    • provide councillors and members of the public with easy access to read a sample of original submissions used to create each theme.
  • Proactive public communication:
    • when AI is used, label it clearly in meeting papers and public communications,
    • explain benefits and safeguards to maintain trust.
Implementing these steps would reduce risk while harnessing the operational benefits of AI summarisation.
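The audit‑trail items in the checklist map naturally onto a per‑summary record that could be published as a committee‑paper appendix. This is a minimal sketch under assumed field names; no such schema is described in the council's papers, and the reference and text values are hypothetical placeholders.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class SummaryAuditRecord:
    """One auditable unit: what the AI was asked, what it produced,
    and what the named officer changed before sign-off."""
    application_ref: str
    prompt: str
    ai_raw_output: str
    human_final_summary: str
    reviewed_by: str
    sample_methodology: str
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = SummaryAuditRecord(
    application_ref="EXAMPLE/0001",  # hypothetical case reference
    prompt="Summarise reasons for objection and support.",
    ai_raw_output="Themes: heritage, transport, flooding...",
    human_final_summary="Edited themes after sample review.",
    reviewed_by="Named planning officer",
    sample_methodology="Random sample of 50 comments re-read manually.",
)
print(json.dumps(asdict(record), indent=2))  # publishable as an appendix
```

A record like this answers the FOI‑relevant questions in one place: which prompts were used, what the raw output was, what was edited, and who signed it off.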

The Bath case: democratic politics meets digital tools

In Bath, the stadium debate touches heritage, environmental protection, urban green space, economic livelihoods and civic identity. The intervention by central government ministers — asking the council not to approve the scheme without special authorisation so the Secretary of State can determine whether to call it in — underscores how high the political stakes are. When a planning officer’s report states that an AI tool was used to summarise thousands of comments, that procedural fact becomes a matter of public interest because it relates to how local democracy actually operates: who reads what, how evidence is framed for decision‑makers, and how transparent the process is to residents. (bathecho.co.uk)
The plurality of voices — from local businesses and many supporters arguing for the economic and cultural benefits of keeping Bath Rugby central, to prominent cultural figures and conservationists warning about the loss of open space and visual harm — means the council’s paper has to do more than tally numbers. It must present materially relevant planning points in a manner that councillors can interrogate and test. Therefore, whether AI was used is not merely a technocratic footnote: it sits at the heart of how legitimate, defensible planning decisions are produced and challenged.

Recommendations for journalists, campaigners and councillors

  • Inspect the committee papers directly: read the planning officer’s appendices, the Council’s statements on AI use (if published), and the raw planning representations on the portal to verify summarisation accuracy.
  • Request the audit trail under FOI if not proactively published: council records of AI prompts, outputs and officer edits are public records relevant to a planning decision and are appropriate subjects for scrutiny.
  • Push for public assurance statements: ask the council to publish a short, plain‑English description of how AI was used, what safeguards were applied and who signed off the final summaries.
  • For councillors: insist on sample checks during the committee — have the officer read out or present the underlying representative comments for the key themes rather than rely solely on a summarised table.
  • For councils considering similar tools: adopt the checklist above before using AI on public representations, and publicise the governance arrangements up front as part of the consultation process.

Conclusion: a pragmatic tool that demands rigorous governance

Using Microsoft Copilot or similar AI to help process thousands of public comments is an understandable response to a modern operational challenge: local planning services are under‑resourced, and the civic appetite for contributing to high‑profile applications is very large. In principle, AI can make a beneficial contribution by surfacing common themes and saving officer time. In practice, for a contentious, heritage‑sensitive project like Bath’s Recreation Ground, the legitimacy of the planning process depends on transparency, auditability and human judgement.
The Bath case offers a useful test of public sector AI governance: it highlights both the promise of AI for handling scale and the imperative for councils to document their methods, publish audit trails and ensure councillors and the public have access to the underlying evidence. Where AI is used in democratic processes, good governance is not optional — it is the condition of public trust.

Source: Bath Voice, "B&NES Council used AI to read public comments on the plans for a stadium in Bath".
 
