South Dublin County Council has insisted it does not use AI to generate automated responses to elected representatives or members of the public — while confirming that Microsoft Copilot is available as a limited content‑creation support tool for staff. The dispute, raised in the Dáil by Dublin Mid‑West TD Mark Ward, highlights a growing fault line in local government: the difference between AI as an assistant and AI as an autonomous responder, and the governance, legal and reputational questions that flow from that distinction.
Background: the exchange that set off the headlines
In January, Deputy Mark Ward told the Dáil — and later confirmed to local reporters — that he had received unusual, sometimes incomprehensible, replies from South Dublin County Council (SDCC) and that, on asking the Council about this, he was told 25 staff members had access to AI and used it to generate content, including responses sent to elected members. SDCC’s official line, relayed through its Director of Digital Services Tommy Kavanagh, has been categorical: staff draft and sign off responses; no automatic outbound replies are generated by AI; and a manager approves all replies before they are sent. The council also states that its use of Copilot sits within the Government’s public‑service AI guidance.

That exchange is a useful case study because it juxtaposes three claims that recur across public sector deployments of generative AI:
- a politician’s experience of receiving content that felt automated or unintelligible;
- an authority’s reassurance that human oversight and managerial sign‑off are in place; and
- the practical reality that staff are using AI tools such as Microsoft Copilot to assist with drafting, aligning tone or speeding content creation.
Overview: what Microsoft Copilot is — and what it isn’t
Microsoft markets Copilot as an AI companion that “helps with your work tasks.” In enterprise and public‑sector deployments Copilot can ingest prompts, search an organisation’s permitted content, and produce prose or summaries in real time. Administrators can confine what Copilot can access — for example, content a user has permission to view — and tenant settings control whether prompts or telemetry are sent outside the organisation’s compliance boundary. But as Microsoft’s documentation makes clear, Copilot’s outputs are model‑generated and can draw on both a model’s general knowledge and the work content available to the user. That combination is powerful — and it is precisely what creates both utility and risk in a local authority context.

Key technical points about Copilot to bear in mind:
- Copilot returns answers based on prompts and on the data the user is authorised to query; it does not inherently create a "signed" human reply log unless the organisation enforces that process.
- Copilot can surface real‑time content and, depending on configuration, may incorporate internet‑facing information alongside internal documentation.
- Tenant and regional settings, and the specific Copilot experience (app‑scoped, report‑scoped, etc.), materially affect whether data leaves the organisation’s compliance perimeter.
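The first of these points, permission‑trimmed retrieval, can be illustrated with a minimal sketch. Everything here (the `Document` class, the group names, the search function) is a hypothetical stand‑in, not Copilot’s actual API; it shows only the principle that an assistant should surface nothing the prompting user could not already open:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    allowed_groups: set  # groups permitted to read this document
    text: str

def permission_trimmed_search(query: str, user_groups: set,
                              corpus: list) -> list:
    """Return only documents the user is already authorised to read.

    Illustrative only: real deployments delegate this check to the
    platform's own permission model rather than reimplementing it.
    """
    visible = [d for d in corpus if d.allowed_groups & user_groups]
    return [d for d in visible if query.lower() in d.text.lower()]

corpus = [
    Document("housing-memo", {"housing-team"}, "Voids programme update for Q3"),
    Document("hr-file", {"hr-team"}, "Confidential personnel programme note"),
]

# A housing officer's search cannot surface the HR document,
# even though both documents mention "programme".
hits = permission_trimmed_search("programme", {"housing-team"}, corpus)
print([d.doc_id for d in hits])  # ['housing-memo']
```

The same query issued by an HR user would return only the HR document; the corpus is filtered by entitlement before any text matching happens, which is the behaviour the bullet describes.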
Why this matters for local government: benefits and immediate use cases
Councils across Ireland and elsewhere are actively experimenting with generative AI to reduce repetitive work and align public messaging. Dublin City Council and other public bodies have publicly explored using AI to increase staff productivity, improve consistency of answers to councillors, and surface internal documents more quickly. The Irish Government has published guidance for responsible use of AI in the public service and recommends human oversight, transparency and strong data governance — precisely the guardrails SDCC says it follows.

Benefits for local authorities include:
- faster drafting of routine responses (e.g., FOI acknowledgements, standard service updates);
- quicker synthesis of internal reports and meeting notes into readable summaries;
- improved consistency and corporate alignment (matching tone, policy references and contact points);
- lower administrative time for staff on repetitive composition tasks, allowing more time for casework and complex decisions.
The risks in practice: why “assistive” use still requires scrutiny
The SDCC episode flags four overlapping risk categories that public bodies must manage:

1) Hallucinations and nonsense replies
Generative AI can produce confident‑sounding but factually incorrect content — the phenomenon often called hallucination. A councillor who receives an answer that contains invented dates, misquoted policy or irrelevant phrasing may reasonably conclude they were sent an automated or machine‑generated message. That damages trust and complicates casework where precise facts matter. Deputy Ward’s description of “unusual language” fits the classic pattern of a model‑generated answer that was not adequately edited.

2) Data governance and privacy leaks
When staff paste constituent casework, personal data, or internal documents into third‑party AI tools, they risk exposing sensitive information to model providers or to systems outside the Council’s control. Microsoft’s Copilot experience includes protections and permission checks — but those protections must be configured correctly and staff must be trained not to include sensitive personal data in prompts. The Irish Government guidance explicitly emphasises privacy and data governance as a core principle.

3) Loss of provenance and accountability
If a reply is drafted by AI, edited by a junior officer, and then approved by a manager — but there is no preserved audit trail of the prompt, edits and approvals — the Council cannot demonstrate who authored the content or why a particular phrasing was chosen. Public communications require provenance for FOI, complaints and legal accountability. Many AI deployments lack integrated prompt and edit logging by default.

4) Undisclosed automation and transparency
Citizens have a legitimate expectation to know whether a response came from an automated system or a human. Transparency is not merely ethical; the EU AI Act increasingly demands disclosure for certain AI uses. The Government’s public‑service guidance endorses human oversight — but doesn’t, on its own, provide the technical mechanisms for end‑to‑end transparency. Without explicit labelling and an institutional policy, “AI‑assisted” replies may be indistinguishable from wholly human ones.

Each of these risks is manageable — but only through layered policy, technical controls and continuous training.
Where SDCC’s statement helps — and where questions remain
South Dublin County Council’s public reply contains three important points that align to best practice: it states AI is not used to send automated replies; staff handle representations; and managerial approval is applied prior to issuing responses. Those are exactly the kinds of human‑in‑the‑loop arrangements recommended in Ireland’s public‑service guidelines. For readers worried about “AI replacing humans,” such oversight is a critical baseline.

But there are gaps in public detail that should concern transparency advocates and councillors alike:
- the figure “25 staff with access” emerges via the Deputy’s account and Echo’s reporting; I could not locate a public SDCC document or Oireachtas transcript that independently records that number or the specific nature of access controls. That makes the claim hard to independently verify from public records. Public bodies adopting AI should publish clear inventories of where AI is used, who has access, and what safeguards govern that access.
- the Council confirms Copilot is available “on a limited basis,” but the precise scope — which teams, which use cases, which tenant settings and whether prompts are logged for audit — is not described in the public statement. These are operational details that materially affect risk.
Regulatory context: the Government guidance and the EU AI Act
Ireland’s Department of Public Expenditure has published Guidelines for the Responsible Use of AI in the Public Service, emphasising seven principles including human oversight, privacy, transparency and accountability. Those guidelines are explicitly intended to be pragmatic: they encourage public bodies to use AI while requiring governance tools, decision frameworks and responsible procurement processes. Councils that use Copilot should therefore map their operational practice against this guidance.

Separately, the EU AI Act — now being implemented across member states — introduces a risk‑based legal framework for AI systems, with new obligations for transparency, data governance and third‑party conformity assessments for higher‑risk systems. While a routine email‑drafting assistant may not always fall into the highest risk categories, failure to disclose AI use or to control sensitive data could create compliance exposures. Public bodies must therefore factor EU law into procurement and deployment decisions.
Practical checklist for councils using AI: governance, tech and people
Councils can capture the productivity gains from tools like Copilot while avoiding the pitfalls described above. The following checklist is a practical, implementable framework that aligns with the Irish Government guidance and international best practice:

- Define authorised use cases
- List concrete functions where AI assistance is permitted (drafting service updates, summarising reports, internal research).
- Prohibit use in sensitive workflows (casework involving health data, child protection, policing intelligence) unless subject to rigorous approvals.
- Limit access with role‑based controls
- Assign Copilot access only to named roles and teams.
- Maintain a central register of users with AI access (who, when, why).
- Enforce data minimisation in prompts
- Prohibit the inclusion of personal identifiers, special category data, or case‑specific details in prompts unless a secure, on‑premises model and logging regime exist.
- Maintain an audit trail
- Log prompts, model responses, human edits and final approvals and retain these records for a defined period for FOI and complaint handling.
- Require explicit human review and sign‑off
- No AI‑drafted reply should be sent without documented human approval. Managers approving replies should attest to factual accuracy and compliance.
- Label AI‑assisted content
- Where AI was used to draft or significantly shape a reply, include a short, standard disclosure indicating the message was AI‑assisted and human‑approved.
- Train staff continuously
- Deliver mandatory training for all users covering prompt hygiene, hallucinations, confidentiality and escalation procedures.
- Configure tenant and regional settings
- Use tenant controls to keep data within the compliance boundary, and disable any features that export protected content to external model training or unverified storage.
- Run periodic audits and red‑team tests
- Commission an independent review of prompts and outputs; simulate cases that could produce hallucinations or data leakage.
- Communicate publicly
- Publish an AI use statement — a short, plain‑English description of where and how AI is used, and the safeguards in place. This builds trust and reduces surprises when constituents receive unusual messaging.
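Several checklist items, data minimisation, the audit trail, human sign‑off and labelling, can be sketched together in a few dozen lines. The redaction patterns, field names and approval flow below are illustrative assumptions, not a production design; a real deployment would use a dedicated PII‑detection service and tamper‑evident storage rather than regexes and an in‑memory list:

```python
import json
import re
import time
import uuid

# Crude illustrative patterns only; a production system would use a
# proper PII-detection service, not handwritten regexes.
REDACTIONS = [
    (re.compile(r"\b\d{7}[A-W][A-Z]?\b"), "[PPSN]"),          # Irish PPS-number shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def minimise(prompt: str) -> str:
    """Strip obvious personal identifiers before the prompt leaves the council."""
    for pattern, label in REDACTIONS:
        prompt = pattern.sub(label, prompt)
    return prompt

def log_event(log: list, **fields) -> None:
    """Append an audit record (prompt, output, edit or approval) to the trail."""
    fields["id"] = str(uuid.uuid4())
    fields["ts"] = time.time()
    log.append(json.dumps(fields))

def finalise_reply(draft: str, approver: str, log: list) -> str:
    """No AI-assisted reply goes out without a named approver and a label."""
    log_event(log, event="approved", approver=approver, text=draft)
    return draft + "\n\n[This reply was drafted with AI assistance and approved by a staff member.]"

audit_log: list = []
raw = "Constituent john.doe@example.com (PPSN 1234567A) asked about bin charges."
clean = minimise(raw)
log_event(audit_log, event="prompt", text=clean)
# ... the model call and human editing would happen here ...
reply = finalise_reply("Thank you for your query about bin charges...", "duty.manager", audit_log)
```

The point of the sketch is the ordering: redaction before anything leaves the organisation, a log entry at each step, and a visible disclosure appended to the outbound text only after a named approver is recorded.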
A deeper technical note: tenant settings, data residency and telemetry
When organisations adopt Microsoft Copilot, administrators face configuration choices that determine data flows. Copilot in enterprise scenarios can be scoped so that:

- it only searches content the user has permission to access (reducing the risk of exposing unrelated internal documents);
- it respects app and workspace boundaries (e.g., Copilot for Power BI can be scoped to specific datasets);
- tenant settings can restrict whether prompts or telemetry are processed outside the tenant’s geographic or compliance boundary.
What elected members and members of the public should expect
Transparency and accountability should be the standard. Elected members and constituents have a right to:

- receive accurate, attributable replies to casework;
- know whether an AI tool materially shaped the text of a response;
- access an audit trail or the ability to query how a reply was produced in the event of inconsistency.
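The last expectation implies that a retained log should be able to answer, after the fact, how a given reply was produced. A small sketch of such a lookup, where the record shape and field names (`reply_id`, `event`, `actor`) are assumptions about what a council’s own logging regime might retain:

```python
# Hypothetical retained audit records for one piece of correspondence.
audit_log = [
    {"reply_id": "REP-104", "event": "prompt",   "actor": "staff.officer"},
    {"reply_id": "REP-104", "event": "draft",    "actor": "copilot"},
    {"reply_id": "REP-104", "event": "edit",     "actor": "staff.officer"},
    {"reply_id": "REP-104", "event": "approved", "actor": "duty.manager"},
]

def provenance(reply_id: str, log: list) -> list:
    """Answer a councillor's query: who and what produced this reply?"""
    return [f"{r['event']} by {r['actor']}" for r in log if r["reply_id"] == reply_id]

def was_ai_assisted(reply_id: str, log: list) -> bool:
    """True if any step in the chain was performed by the AI assistant."""
    return any(r["actor"] == "copilot" and r["reply_id"] == reply_id for r in log)

print(provenance("REP-104", audit_log))
print(was_ai_assisted("REP-104", audit_log))  # True
```

With records like these retained for a defined period, an FOI officer or councillor can reconstruct the full chain from prompt to approval instead of relying on assurances.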
Recommendations for immediate next steps for South Dublin County Council (and peers)
- Publish an AI transparency statement that lists the services, teams and number of staff with AI access; if the “25 staff” figure is accurate, disclose it formally and explain the scope of their access. Public registries lower suspicion and increase accountability.
- Share the approval workflow publicly (who approves replies, what checks are required, how long logs are retained). That will reassure councillors and constituents and reduce repeat queries about “computerised” messages.
- Require mandatory training on prompt hygiene and hallucination detection for all staff who use AI tools, and document completion.
- Review tenant settings with Microsoft and ensure data residency and telemetry controls align with council policy, the Government’s guidance and the EU AI Act.
- Institute periodic independent audits (technical and editorial) of AI outputs and publish redacted examples showing how AI‑assisted drafts are reviewed and corrected.
Conclusion: AI assistance is not a scapegoat — governance is
The SDCC episode is not an argument against using AI in the public sector; it is a prompt to get governance right. Copilot and similar tools can deliver meaningful efficiencies for cash‑constrained councils, and the Irish Government’s guidelines provide a sensible framework for adoption. But the reputational damage of unexplained, confusing or harmful public messaging is immediate and real. Councils must move from trial and hope to policy, configuration, logging and training.

If South Dublin County Council truly does not automate replies — and instead uses Copilot strictly as a human‑assisted drafting aid with managerial sign‑off — that is consistent with responsible use. The remaining question is whether the operational controls match the assurances. Transparency about the number of users, the scope of the tool, the tenant settings, and retained logs will be the clearest way to resolve the dispute and rebuild trust with elected members and the public. For the wider local government sector, the same lesson applies: embrace the productivity gains — but document, disclose and audit them.
Source: Echo.ie ‘We do not use ‘AI’ to generate automated responses’