Waitaki District Council’s cautious, staff-first rollout of generative AI shows how a small public body can harness large language models for practical gains—while keeping human judgment, privacy safeguards and governance firmly in the driver’s seat. The council’s chief digital officer, Teresa McCallum, has led a targeted programme that introduced Microsoft Copilot inside the council’s Microsoft environment and used Anthropic’s Claude for specific tasks such as thematic analysis of consultation submissions. That practical focus—summarising thousands of Long-Term Plan submissions, converting technical material into plain English, and extracting spreadsheet insights—illustrates the tidy, high-value uses of AI for local government. At the same time the council has set hard boundaries: human sign-off on any AI output, ring-fenced licences that do not feed municipal data back into vendor training, and a cross-functional AI Governance Group to monitor use and risk. (waitaki.govt.nz)
Background: why a rural council is thinking big about AI
Waitaki’s pivot to AI came with a leadership change: a new chief digital officer who brought an enterprise-first view of how large language models (LLMs) can help routine council work. That context matters—local government operates on tight budgets, with episodic spikes of public input (for example, long-term planning consultations) that create intense short-term workloads. Applying AI to tasks such as document summarisation, theming of submissions, and plain‑language conversion can free scarce staff time for more strategic, face-to-face community work while reducing turnaround time on public-facing outputs. The council’s own transformation materials emphasise using digital tools to improve services without replacing the human contact residents expect. (waitaki.govt.nz)
Why this is notable
- Small and medium-sized councils typically lack specialist data-science teams; targeted AI pilots that sit inside existing productivity tools lower the technical and procurement overhead.
- Embedding LLMs in a known vendor environment (Microsoft Copilot inside Microsoft 365) simplifies governance because data flows and contractual terms are easier to map than when staff use a mix of consumer tools.
- Using a separate model (Claude) for high-volume thematic analysis allowed Waitaki to pick the right tool for the job rather than forcing a single vendor to do everything.
What Waitaki did — the practical rollout, in plain language
Starting with one platform and proving value
Waitaki began by rolling out Copilot inside its Microsoft estate. Copilot integrates with Word, Excel, PowerPoint and Teams, so it can:
- Turn first‑draft content into polished outputs;
- Summarise long documents or meeting transcripts;
- Help generate slide decks and convert technical jargon into accessible text;
- Assist data analysis in Excel for quicker insights.
Tackling a workload spike: LTP submissions as a use case
When the Long-Term Plan (LTP) drew hundreds more submissions than expected, the council used Claude to run thematic analysis across the mass of public input. Staff then validated the AI’s categories and themes before any summaries were included in briefing papers. According to the council’s account, the AI’s categorisations were “very, very accurate,” and the automated theming saved staff hours of repetitive reading—while preserving the human checks needed for democratic legitimacy. This kind of hybrid workflow—AI to surface structure, humans to verify nuance—represents the emerging best practice for public consultations.
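The article does not describe the council’s pipeline in code, so the following is a minimal sketch of that hybrid workflow, assuming the official `anthropic` Python SDK under an enterprise contract; the model name, prompt wording and batch handling are illustrative assumptions, and every AI-proposed theme is routed to a human reviewer rather than used directly.

```python
import json

import anthropic  # official SDK: pip install anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment under an enterprise
# contract whose terms exclude customer data from model training.
client = anthropic.Anthropic()

PROMPT = (
    "You are helping a council analyse public consultation submissions.\n"
    "For each numbered submission below, return a JSON list of objects with\n"
    'fields "id" and "themes" (one to three short theme labels each).\n'
    "Return only the JSON, with no other text.\n\n{batch}"
)

def theme_batch(submissions: list[dict]) -> list[dict]:
    """Ask the model to propose themes for a small batch of submissions."""
    batch = "\n".join(f'{s["id"]}. {s["text"]}' for s in submissions)
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model choice
        max_tokens=2000,
        messages=[{"role": "user", "content": PROMPT.format(batch=batch)}],
    )
    return json.loads(response.content[0].text)

# The model only proposes structure; staff confirm every theme before it
# appears in a briefing paper.
for item in theme_batch([{"id": 1, "text": "Keep the library open on Saturdays."}]):
    print(item["id"], item["themes"], "-> pending human review")
```

The design point is the last loop: the model surfaces candidate themes, and nothing reaches a committee paper until a named member of staff has confirmed them.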
Governance, licences and cost discipline
The council adopted an explicit AI policy, issued a ring‑fenced set of licences (reportedly 52), and established a cross‑functional AI Governance Group to oversee safe usage, steer pilots and prevent unauthorised use of consumer tools. The council quoted an annual total for the licences — a figure it compared to the salary of one additional staff member to argue fiscal prudence. That price-versus-staff comparison is useful as a planning benchmark, but it should be treated as an internal procurement figure rather than an industry standard—prices and licensing models vary widely by vendor, contract terms and scope of use. Where precise financials are quoted in press accounts, independent verification against procurement documents or council budget papers is recommended.
The governance foundations: human-in-the-loop, DPIAs and transparency
Human oversight is non‑negotiable
Waitaki’s approach places a named human reviewer at the end of every AI-generated product used for decision-making. This human-in-the-loop rule addresses the two most immediate risks of LLMs in public administration: hallucination (confident but incorrect output) and loss of provenance (not being able to trace which original comments were summarised). Councils that have published guidance on Copilot emphasise the same requirement: AI should generate drafts and options, not final decisions. (buckinghamshire.gov.uk)
Data protection impact assessments (DPIAs) and procurement clauses
Best practice for public sector AI includes:
- Running DPIAs before using AI on public comments or citizen data;
- Ensuring procurement contracts include non‑training clauses (no reuse of submitted data for vendor model training unless explicitly allowed) and deletion/audit rights;
- Logging prompts and AI outputs used in decision-making so outputs are auditable and can be examined under freedom-of-information rules (a minimal sketch of such a log follows this list).
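No vendor mandates a particular log format, so the record shape below is an assumption: a minimal, append-only JSONL entry capturing the prompt, the raw output and the named reviewer, which is roughly the minimum needed to answer a freedom-of-information request about an AI-assisted paper.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_audit_log.jsonl")  # assumed location; append-only by convention

def log_ai_use(prompt: str, raw_output: str, reviewer: str, decision_ref: str) -> None:
    """Append one auditable record of an AI-assisted output to a JSONL log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_ref": decision_ref,   # e.g. a committee paper reference
        "prompt": prompt,
        "raw_output": raw_output,
        "output_sha256": hashlib.sha256(raw_output.encode()).hexdigest(),
        "human_reviewer": reviewer,     # named sign-off, per policy
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_ai_use(
    prompt="Summarise the attached LTP submissions on water charges.",
    raw_output="Three themes emerged: affordability, metering, rural supply.",
    reviewer="J. Example",              # hypothetical reviewer name
    decision_ref="LTP-2024-007",        # hypothetical reference number
)
```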
Transparency to maintain public trust
Where AI contributes to planning papers, the emerging norm is to publish a short assurance statement listing what the AI did, who reviewed it, and how the council validated the results. For high‑stakes decisions (planning approvals, allocation of scarce public assets), councils should make the AI audit trail available—either as part of the committee papers or on request—so citizens and journalists can check the underlying sample of original submissions against the AI summary. This reduces the “black box” perception and improves democratic legitimacy.
Vendor realities: what Copilot and Claude really promise—and their limits
Microsoft Copilot: enterprise controls and data assurances
Microsoft’s enterprise Copilot offering is designed so customer data used inside a commercial tenant is not used to train Microsoft’s public foundation models unless an explicit opt-in or agreement exists. Microsoft’s documentation and public statements stress separation between commercial tenant data and the datasets used to train foundation LLMs; enterprise customers can also control whether conversational telemetry is used to improve broader product features. These controls make Copilot a natural fit for organisations that want cloud-first productivity gains without exposing internal documents to vendor training pipelines. (support.microsoft.com)
Limitations to remember
- Copilot and other LLM assistants still hallucinate; Microsoft warns against using generated outputs where strict accuracy and reproducibility are required (e.g., legal or financial reporting) without independent verification.
- Telemetry, metadata and opt‑in settings differ across consumer and enterprise tiers—councils should treat vendor FAQs as starting points and demand contractual guarantees during procurement. (pcgamer.com)
Anthropic Claude: commercial vs consumer promises
Anthropic’s corporate (Claude for Work / Enterprise) policies state that commercial customers’ inputs are not used to train the company’s foundation models, and that Anthropic acts as a data processor on behalf of the customer. That makes Claude a reasonable choice for councils when contracted under enterprise terms. Consumer versions of Claude have different default settings around data use, and recent vendor communications show ongoing updates to opt‑in/opt‑out regimes—so confirming which product tier is in use matters. (docs.anthropic.com)
Practical implication: using Claude under an enterprise contract aligns with Waitaki’s ring‑fenced approach, but councils must confirm the exact contractual clauses on training, retention and exportability of logs before operationalising high‑risk uses. (support.anthropic.com)
Strengths of Waitaki’s approach — what other councils should notice
- Task-focused pilots: The council didn’t try to bolt AI onto everything. It picked high-frequency, repetitive tasks with clearly measurable outputs (submission theming, plain-language conversion, spreadsheet analysis), creating early credibility for the programme.
- Vendor pragmatism: Using Copilot inside Microsoft 365 for document workflows and a separate LLM for bulk thematic analysis shows healthy tool selection driven by fit-for-purpose considerations rather than vendor monoculture.
- Cost-conscious licensing: The council compared licence costs to staff costs and framed the investment as an efficiency lever. This cost-to-benefit framing helps elected members and ratepayers understand the business case—so long as figures are transparent in budget papers.
- Governance and skills: The formation of an AI Governance Group and mandatory staff checks before use built the cultural and procedural scaffolding needed for safe scaling. Those bodies also create an institutional memory for later procurement decisions.
Risks and unresolved questions — where caution remains essential
1) Accuracy and auditability
LLMs produce fluent outputs that may mix fact and invention. When summaries feed into policy papers or committee reports, even small errors can change interpretation and outcomes. Councils must publish sample checks and retain the raw data behind any automated aggregation so councillors and the public can verify claims. If AI-derived summaries cannot be audited back to a representative set of original submissions, trust will erode quickly.
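The article quotes the council describing the categorisations as “very, very accurate” without saying how that was measured; one simple, publishable check is a blind spot-check: draw a reproducible random sample of submissions, have staff label them independently, and report the agreement rate in the committee paper. A minimal sketch, assuming AI themes are stored against submission IDs:

```python
import random

def spot_check_sample(ai_labels: dict[int, str],
                      sample_size: int = 50,
                      seed: int = 42) -> list[int]:
    """Draw a reproducible random sample of submission IDs for human re-labelling."""
    rng = random.Random(seed)  # fixed seed so the sample itself is auditable
    return rng.sample(sorted(ai_labels), k=min(sample_size, len(ai_labels)))

def agreement_rate(ai_labels: dict[int, str], human_labels: dict[int, str]) -> float:
    """Share of sampled submissions where the human reviewer confirmed the AI theme."""
    checked = [sid for sid in human_labels if sid in ai_labels]
    agreed = sum(ai_labels[sid] == human_labels[sid] for sid in checked)
    return agreed / len(checked) if checked else 0.0

ai = {1: "water charges", 2: "roading", 3: "libraries"}
sample = spot_check_sample(ai, sample_size=2)
human = {sid: "water charges" if sid == 1 else "roading" for sid in sample}
print(f"Sampled {sample}; agreement {agreement_rate(ai, human):.0%}")
```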
2) Data-handling nuances
Vendor FAQs and white papers are helpful context but not procurement safeguards; councils should not assume that an FAQ carries contractual force. Procurement should demand:
- Explicit non-training clauses for customer data (or clearly specified opt-ins);
- Data residence and export guarantees;
- Audit and deletion rights;
- Defined retention periods for logs and conversation history. (blogs.microsoft.com)
3) Bias, representativeness and weighting rules
AI summarisation that relies on raw frequency counts risks privileging volume over substance—e.g., mass template submissions might drown out unique technical comments from statutory consultees. Councils must calibrate weighting rules, explicitly flag technical submissions, and preserve access to representative underlying material so decision-makers can interrogate the summary. This is not an AI-only issue; it’s a design choice that must be transparent.
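Weighting rules are a local design choice, not something the article attributes to Waitaki. One illustrative rule, sketched below, collapses near-identical template submissions into a single entry with a count while keeping statutory consultees’ submissions individually visible; the field names and the statutory flag are assumptions for the example.

```python
import re
from collections import defaultdict

def normalise(text: str) -> str:
    """Collapse whitespace and case so template submissions compare as equal."""
    return re.sub(r"\s+", " ", text.strip().lower())

def group_template_submissions(submissions: list[dict]) -> list[dict]:
    """Merge identical template texts into one weighted entry; never merge
    submissions flagged as coming from statutory consultees."""
    groups: dict[str, list[dict]] = defaultdict(list)
    flagged = []
    for sub in submissions:
        if sub.get("statutory"):      # e.g. regional councils, iwi, utilities
            flagged.append(sub)       # kept individually visible
        else:
            groups[normalise(sub["text"])].append(sub)
    grouped = [{"text": members[0]["text"], "count": len(members)}
               for members in groups.values()]
    return flagged + grouped

subs = [
    {"text": "Save our pool!", "statutory": False},
    {"text": "save  our POOL!", "statutory": False},   # same template letter
    {"text": "Hydraulic capacity concerns at site B.", "statutory": True},
]
for item in group_template_submissions(subs):
    print(item)
```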
4) Public communication and consent
Even where AI is used only as an internal acceleration tool, the public should be told in plain English:
- that AI assisted with theming or summarisation,
- what safeguards were applied,
- who approved the final wording,
- and how to request corrections or see the audit trail.
A practical checklist for councils considering the same path
- Establish an AI Governance Group with cross-functional membership (IT, legal, communications, service leads).
- Pick one or two pilot tasks with clear metrics: time saved, accuracy targets, and auditability measures.
- Use enterprise-grade vendor offerings and insist on contractual non‑training clauses for public-data uses.
- Run DPIAs and publish a short public statement (what AI did, who reviewed it, how errors are corrected).
- Require human sign-off for any AI-derived text that informs decisions or public communications.
- Keep an audit trail of prompts, raw AI outputs and the final human‑edited documents.
- Invest in staff training and a champions programme to build adoption and literacy. (local.gov.uk)
How Waitaki’s experience fits the wider municipal picture
Waitaki’s “fast follower” posture—adopting established enterprise tools, sharing policy templates with other councils, and joining cross-council conversations—mirrors sensible practice emerging in the UK and elsewhere. Early adopters such as Buckinghamshire Council have already demonstrated measurable time savings in contact centres, while emphasising the need for governance and measurement programmes. These examples teach two complementary lessons:
- AI can create real, measurable efficiencies when applied to high-volume knowledge tasks, and
- Those gains must be accompanied by governance frameworks that preserve due process and public trust. (buckinghamshire.gov.uk)
Final analysis: human judgement, not hype
Waitaki District Council’s rollout is a useful model for small public bodies: targeted pilots, vendor selection tied to contractual privacy assurances, ring‑fenced licences, a cross-functional governance body, and mandatory human review. Those elements combine to deliver real operational value without ceding control of public data or decision-making authority to opaque systems.
Caveats and checks:
- Any cost comparisons (licence bundles vs staff cost) quoted in news reports should be reconciled with procurement documentation before being treated as a sector benchmark.
- Vendor FAQs and marketing materials are necessary context but are not substitutes for contractual guarantees on training, retention and audit rights.
Conclusion
Waitaki’s measured approach demonstrates an effective blueprint for municipal AI adoption: pick clear tasks, insist on enterprise-grade privacy protections, keep humans in charge, and publish governance arrangements. In doing so, small councils can capture productivity gains while protecting the public interest—finding a practical, democratically legitimate way to add a little more human attention to the work AI helps accelerate.
Source: Otago Daily Times, “Finding the human touch with AI”