Corrections has quietly moved from piloting generative tools to policing them: after a small number of staff were found to have used Microsoft Copilot Chat to help draft formal casework — including Extended Supervision Order reports — the department has labelled that behaviour “unacceptable,” launched a privacy risk assessment, and reiterated strict boundaries around what staff may and may not put into AI chat interfaces.
Background
Corrections’ announcement is the latest example of a public-sector organisation confronting the messy gap between enthusiasm for productivity-enhancing AI and the hard legal, ethical, and operational risks that follow when those tools are used around sensitive personal data. The Department of Corrections in New Zealand says its authorised AI footprint is deliberately narrow: staff may access a standalone Copilot Chat feature that sits under the organisation’s Microsoft 365 licence, while other third‑party AI apps are blocked on the Corrections network. That restriction is intended to keep AI interactions inside an enterprise-controlled environment with established privacy and security controls.
At the same time, the department’s guidance is explicit: personal information — names, identifiers, health or medical information, and details relating to people under Corrections’ management — must not be entered into Copilot Chat, and the tool must not be used to draft, structure, analyse, or generate content for reports or assessments that contain personal information. Corrections also reports that since Copilot was introduced on managed devices in November 2025, roughly 30 percent of staff have engaged with the tool. A privacy risk assessment has already been completed where misuse was identified, and the department says auditing of prompts is possible because prompts and responses are searchable and exportable for review.
What happened — a concise account
- Corrections discovered a small number of incidents in which staff used Copilot Chat in ways that contravened the AI policy — specifically, assisting with the drafting of formal reports that contain personal information.
- The department restricted access to Copilot so that only the free Copilot Chat feature available via Microsoft 365 is permitted on managed devices; other public AI applications are blocked on the Corrections network.
- The agency completed a privacy risk assessment in response to the incidents and has reminded staff that misuse is “unacceptable”; it also stated that it has an AI assurance function within cybersecurity and participates in the All‑of‑Government community of practice on AI.
Why this matters: legal, safety and ethical stakes
The Privacy Act is the baseline
Under New Zealand’s Privacy Act, the collection, use and disclosure of personal information — including through AI tools — remain regulated activities. Agencies are responsible for ensuring that their use of technology complies with privacy principles: that personal information is collected lawfully, is securely held, is only used for authorised purposes, and that individuals can exercise their rights to access and correct information about themselves. The Office of the Privacy Commissioner has been explicit: the Privacy Act applies to AI-driven uses, and agencies must understand the technologies they deploy and ensure that usage meets privacy requirements.
Putting case-sensitive corrections material — criminal-history details, mental‑health notes, rehabilitation progress and supervision conditions — into a generative AI chat is not a theoretical risk. Even if prompts and responses are kept within a Microsoft tenant, mistakes in prompts, misconfiguration, or unauthorised copying of outputs can create real and lasting privacy harms. The legal and reputational consequences for agencies that fail to protect this data are significant.
Accuracy, explainability and downstream effects
Generative AI is not a neutral drafting assistant; it can hallucinate, omit salient context, or reframe narratives in ways that alter meaning. For frontline probation officers and community corrections staff, a report that misstates a risk factor or misattributes a behavioural sign can materially affect an individual’s liberty and supervision conditions.
- AI outputs require human oversight. Corrections’ policy that Copilot should be used only for non‑sensitive, assistive tasks is aligned with best practice: where outputs affect decisions about people, a trained human must review, correct and take responsibility for the final document.
- Because corrections reports are often used in courts and to inform ministerial or judicial decisions, their provenance and accuracy must be defensible. Accepting AI-drafted paragraphs without robust verification is a pathway to error and liability.
Security and data residency questions
Microsoft emphasises enterprise data protection for Copilot Chat: prompts and responses for enterprise users are logged subject to tenant policies and commitments, and are not used by Microsoft to train its foundation models under enterprise terms. However, that model still requires trust in cloud controls, the tenant configuration, and the surrounding operational practices. Agencies must consider where LLM calls are actually processed, what metadata is created, and how logs are stored and protected. Relying on a vendor’s enterprise safeguards reduces some risks but does not eliminate them.
Corrections’ controls: what they’ve done and where the gaps remain
Controls in place
- Network-level blocking of public AI apps outside the approved Microsoft Copilot Chat deployment.
- A written AI policy aligned to Government Chief Digital Officer guidance, explicitly forbidding entry of personal information into Copilot Chat and the drafting of personal-data reports.
- Auditability: prompts and outputs are searchable and exportable for review, enabling retrospective investigations where misuse is suspected (a minimal sketch of such a retrospective review follows this list).
- An organisational AI governance structure: an AI assurance officer sits inside the directorate for cybersecurity and an AI working group provides governance and guidance.
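To make the auditability point concrete, here is a minimal sketch of what a scheduled, retrospective review of exported prompts could look like. The export format (a JSON Lines file with `user`, `timestamp` and `prompt` fields), the file name and the patterns are assumptions for illustration, not Corrections’ or Microsoft’s actual schema; a real deployment would rely on a vetted PII classifier rather than ad hoc regular expressions.
```python
import json
import re
from pathlib import Path

# Illustrative patterns only -- a production audit would use a vetted PII/DLP
# classifier, and the terms below are assumptions, not Corrections' terminology.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone-like number": re.compile(r"\b(?:\+?64|0)\d{7,10}\b"),
    "date-of-birth cue": re.compile(r"\b(date of birth|DOB|born on)\b", re.IGNORECASE),
    "casework cue": re.compile(r"\b(offender|probation|supervision order|parole)\b", re.IGNORECASE),
}


def flag_prompt(prompt: str) -> list[str]:
    """Return the names of any patterns that match the prompt text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]


def audit_export(path: Path) -> list[dict]:
    """Scan an exported prompt log (assumed JSON Lines schema) and collect red flags."""
    findings = []
    with path.open(encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)
            hits = flag_prompt(record.get("prompt", ""))
            if hits:
                findings.append({
                    "user": record.get("user"),
                    "timestamp": record.get("timestamp"),
                    "matched": hits,
                })
    return findings


if __name__ == "__main__":
    # Hypothetical export file name; the real export path would come from the tenant.
    for finding in audit_export(Path("copilot_prompt_export.jsonl")):
        print(finding)
```
The value of such a script lies less in its sophistication than in the discipline around it: exports are reviewed on a schedule and findings are routed to the AI assurance function, rather than left to ad hoc discovery.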
Where practical problems remain
- Policy adherence vs. frontline realities: staff under time pressure or with limited digital literacy may still be tempted to paste sensitive material into a chat for drafting efficiency. Policies are only as effective as training, enforcement, and the usability of approved tools and processes.
- Culture and incentives: if staff perceive AI as a faster route to completing paperwork and risk little accountability, policy alone won’t stop careless behaviours. Corrections’ statement that it will audit and has issued reminders is necessary but not sufficient to change day‑to‑day practice.
- Technical nuance: the difference between Copilot Chat as a free feature and the full Microsoft 365 Copilot enterprise product has operational implications. If staff sign into Copilot Chat with their enterprise credentials, enterprise data protections can apply — but that depends on correct tenant configuration and the particular Copilot feature set in use. Misunderstanding these differences can create a false sense of security.
The political and public-administration angle
Public servants working with highly sensitive citizen information are held to high standards of accountability. When an agency like Corrections (which manages people under supervision and those in custody) reports misuse of AI, it raises questions about procurement, training, and governance across the wider public sector.
New Zealand’s Government Chief Digital Officer (GCDO) has been leading efforts to create an All‑of‑Government approach to AI, including communities of practice and public service guidance on GenAI. That central leadership is important: consistency across agencies reduces the risk that one agency’s mistake becomes a systemic problem. Corrections’ participation in the All‑of‑Government community of practice is therefore an important compliance signal — but participation does not obviate the need for local competence and enforcement.
What Corrections and similar agencies should do — operational recommendations
Below are practical actions that public agencies handling sensitive personal information should adopt immediately. These are ranked in order of priority.
- Reinforce “no personal data” rules with mandatory, scenario-based training for all staff who might interact with AI tools.
- Implement technical controls at the tenant and endpoint level to prevent copy‑paste of classified fields into chatboxes and to warn or block when certain data classes are detected.
- Require that any AI‑assisted draft of operational casework include an explicit, auditable human review checklist before submission, with name, role and timestamp captured (a minimal sketch of such a record appears after this list).
- Enforce logging and periodic audits of AI prompts and responses, with automated red-flags for prompts that include personally identifiable information (PII).
- Use data‑classification policies to create whitelists and blacklists: identify what information is absolutely off‑limits for generative models and ensure those controls are enforced by tooling.
- Consider a staged deployment strategy: start with low‑risk pilots, evaluate outcomes, and expand only when compliance and safety metrics are consistently met.
- Maintain a clear breach response playbook that includes timely notification to privacy authorities, affected individuals, and transparent internal reporting.
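As an illustration of the review-checklist recommendation above, the confirmation could be captured as a small structured record that refuses to mark a draft as submittable until every item is confirmed. The field names, checklist items and document identifier below are hypothetical; an agency would align them with its own report templates and case-management system.
```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical checklist items -- an agency would derive these from its own
# reporting standards, not from this sketch.
REQUIRED_CHECKS = (
    "no personal information was entered into the AI tool",
    "all AI-suggested wording was verified against source records",
    "risk statements were written or confirmed by the report author",
)


@dataclass
class HumanReviewRecord:
    """Auditable confirmation that a named human reviewed an AI-assisted draft."""
    reviewer_name: str
    reviewer_role: str
    document_id: str
    confirmed_checks: set[str] = field(default_factory=set)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def confirm(self, check: str) -> None:
        if check not in REQUIRED_CHECKS:
            raise ValueError(f"Unknown checklist item: {check}")
        self.confirmed_checks.add(check)

    def is_submittable(self) -> bool:
        """Only allow submission once every required check is confirmed."""
        return set(REQUIRED_CHECKS) <= self.confirmed_checks


# Usage sketch with placeholder names and a hypothetical document identifier.
record = HumanReviewRecord("A. Reviewer", "Probation Officer", "DRAFT-001")
for check in REQUIRED_CHECKS:
    record.confirm(check)
assert record.is_submittable()
```
The point is not the data structure itself but that the confirmation is captured explicitly, with name, role and timestamp, and can be exported alongside the final document for audit.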
Technical mitigations and choices
- Enterprise data protection (EDP) and tenant isolation: configure Copilot so prompts and responses are retained within the Microsoft 365 service boundary, and confirm contractual and technical guarantees that enterprise data is not used to improve vendor models. But note: EDP relies on correct configuration and active enforcement. Regular validation by internal auditors is essential.
- Data loss prevention (DLP) integrated with Copilot: modern DLP tools can detect PII and prevent it from being pasted into chat UIs. This should be paired with contextual user warnings and mandatory supervisor approval for edge cases.
- Role-based access controls: limit the features available to frontline staff. For instance, allow Copilot to suggest wording for generic administrative communications but disable file upload and document grounding where sensitive files could be referenced.
- Private LLM instances for high-risk workflows: for truly sensitive workloads some agencies may choose dedicated, on‑premises or contractually segregated cloud LLMs that offer stronger contractual data residency and non‑training guarantees. This is more expensive but the right choice where the risk profile demands it.
- Tamper-evident audit trails: prompts used in decision‑relevant documents should be captured in immutable logs, linked to the final human‑approved document, and retained under a clear retention policy.
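The tamper-evident requirement in the last item can be approximated with a simple hash chain, in which each log entry commits to the previous entry, the prompt and the hash of the final approved document. This is a minimal sketch under assumed field names; production systems would normally pair it with append-only or write-once storage rather than a hand-rolled chain.
```python
import hashlib
import json
from datetime import datetime, timezone


def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def append_entry(chain: list[dict], prompt: str, approved_document: bytes, reviewer: str) -> dict:
    """Append a hash-chained entry linking a prompt to the human-approved document."""
    previous_hash = chain[-1]["entry_hash"] if chain else "genesis"
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "prompt_hash": _digest(prompt.encode("utf-8")),
        "document_hash": _digest(approved_document),
        "previous_hash": previous_hash,
    }
    # The entry hash covers every field above, so any later edit is detectable.
    body["entry_hash"] = _digest(json.dumps(body, sort_keys=True).encode("utf-8"))
    chain.append(body)
    return body


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every link; an edited or deleted entry breaks the chain."""
    previous_hash = "genesis"
    for entry in chain:
        body = {key: value for key, value in entry.items() if key != "entry_hash"}
        if body["previous_hash"] != previous_hash:
            return False
        if _digest(json.dumps(body, sort_keys=True).encode("utf-8")) != entry["entry_hash"]:
            return False
        previous_hash = entry["entry_hash"]
    return True


# Usage sketch with placeholder content.
log: list[dict] = []
append_entry(log, "Suggest neutral wording for a scheduling email.", b"final report bytes", "A. Reviewer")
assert verify_chain(log)
```
Storing only hashes of prompts and documents in the chain also keeps the integrity record itself free of personal information, while still allowing an auditor to confirm that a specific approved document corresponds to a specific logged prompt.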
Training, culture and the human factor
Technology controls alone won’t prevent misuse. Corrections’ emphasis on ongoing conversations with Community Corrections staff and regular reminders of its AI policy is the right cultural move — but more is required.
- Training should be practical and role-based: short modules that show exactly what is and isn’t allowed, with examples drawn from real reporting tasks.
- Supervisors must be capable of identifying AI artifacts in prose and challenging staff drafts when AI has been used. This requires training for managers as much as frontline users.
- Reward structures should not unintentionally incentivise cutting corners. If performance metrics prioritise throughput without safeguarding quality, staff will seek shortcuts.
Accountability: audits, sanctions and reporting
Corrections makes two consequential points: prompts and responses are auditable, and misuse is treated “extremely seriously.” That creates a path for accountability, but public agencies must create a balanced enforcement regime that focuses on remediation and learning rather than only punishment.
- For inadvertent or low‑harm incidents: mandated refresher training, documented remediation plans, and supervised rework of affected reports.
- For willful or repeated breaches: formal disciplinary processes, escalation internally and — where required by privacy law — notification to the Office of the Privacy Commissioner and affected individuals.
- For systemic failures (e.g., misconfigured tenant, inadequate DLP, no human review): commissioning an independent review, public reporting of findings, and concrete timelines for remediation.
Broader lessons for government: what other agencies should take from this
- Central guidance is necessary but not sufficient. The GCDO’s All‑of‑Government work and public‑service AI guidance are essential backstops; agencies must operationalise those principles locally.
- Vendor guarantees matter — but so does independent verification. Microsoft’s enterprise promises for Copilot’s data protections reduce risk, but agencies must validate configuration and monitor telemetry.
- Low‑risk pilots will surface cultural and training gaps. Use them to fix process and governance before rolling tools into higher‑risk casework.
- Don’t treat AI as a productivity plug‑in only: treat it as an organisational change that touches policy, procurement, audit, legal, privacy, HR and frontline operations.
Risks to watch beyond privacy and accuracy
- Re-identification risks from “de‑identified” text: even partial personal details can be combined to re‑identify individuals, particularly in small communities or where cases are high‑profile.
- Differential impact and bias: AI outputs may contain subtle cultural, gender or ethnic biases that can skew assessment language or the framing of risk.
- Chaining and provenance: if an AI‑assisted report is used to justify further automated decisions, errors compound and become harder to reverse.
- Vendor dependency and supply‑chain risk: heavy reliance on a single cloud vendor for both productivity and AI increases systemic exposure to outages, policy changes, or contractual disputes.
Conclusion
Corrections’ decision to call staff use of AI outside approved parameters “unacceptable,” and to conduct a privacy risk assessment, is the right immediate response to identified misuse. The agency’s mix of technical controls, policy clarity, auditability and engagement with All‑of‑Government AI governance gives it a structured path to safer adoption.
But the incident also illustrates an uncomfortable truth for public administration: introducing advanced AI into high‑stakes workflows exposes latent gaps in training, culture, technical configuration and governance. Enterprise features such as Microsoft’s Copilot Chat and EDP provide important protections, yet they are not substitutes for rigorous human oversight, role‑based limits, and ongoing audit and training regimes. Agencies that treat AI as a simple productivity upgrade rather than an organisational change risk harming the very people they are mandated to protect.
For Corrections and other agencies working with sensitive personal information, the calculus must remain conservative: use AI to reduce mundane administrative friction, but never to shortcut the human judgement, accountability, and legal safeguards that underpin public trust.
Source: Otago Daily Times Corrections labels staff's AI use as 'unacceptable'

