The Northwest Territories government says it has no plans to create a standalone AI policy for the public service, relying instead on a high‑level generative AI guideline released in May 2025 and existing information‑management rules — a stance that has prompted praise for caution from some quarters and sharp criticism from privacy, legal and labour experts who say the guideline leaves too many operational and accountability gaps.
Background
The Government of the Northwest Territories (GNWT) published a high‑level guideline on the use of generative artificial intelligence in May 2025, and the Department of Finance reports the public service has access to an internal “AI Hub” offering the guideline, general training on generative AI and Microsoft Copilot training for employees. The finance minister has publicly described existing cybersecurity and information‑management arrangements as robust enough that the GNWT will not develop a separate, standalone AI policy. The guideline sets out broad expectations: establish rules and responsibilities for generative AI, protect data and outputs, explain why and when AI is used, and monitor AI deployments. It points users to federal guidance on generative AI and to existing GNWT policies on privacy and records handling.

At the same time, the GNWT says it has not carried out its own privacy impact assessments for AI tools; instead, it reports it has leveraged assessments from other jurisdictions and is conducting legal reviews of vendor terms for common tools such as Microsoft Copilot. This approach — a minimalist, guideline‑first posture that leans on existing rules rather than a bespoke, enforceable AI policy — is increasingly common among smaller governments and public organisations that want to enable productivity gains while limiting procurement costs and administrative overhead. Yet the practical consequences of that posture are what has triggered debate in Yellowknife and among legal, privacy and labour observers.

Why this story matters to IT leaders and public servants
Generative AI is no longer experimental. Tools such as ChatGPT, DALL·E, Claude and Microsoft Copilot are being used across governments to draft text, summarise meetings, help with code and even create images and transcripts. They offer measurable productivity gains, but they also introduce systemic risks: leakage of sensitive data, fabricated or “hallucinated” citations and facts, biased outputs, environmental cost, and new legal and records‑management headaches.

The GNWT’s decision to avoid a formal AI policy means the territory’s approach to those risks will be shaped by:
- How strictly existing privacy, cybersecurity and records rules are enforced in AI use cases.
- Whether vendor contracts include enforceable protections such as non‑training clauses, deletion and audit rights.
- Whether tenant and connector settings for productivity assistants (e.g., Copilot) have been independently audited.
- How the GNWT handles transparency and provenance for AI‑assisted outputs that feed into decisions affecting the public.
What the GNWT guideline actually says — and what it doesn’t
What it says
The GNWT guideline is a short, high‑level document that encourages:
- Clear roles and responsibilities for generative AI use;
- Safeguards to protect data and manage risks;
- Transparency about why, how and when generative AI is used;
- Ongoing monitoring of generative AI programs.
What it omits or leaves vague
- No binding rules on which tools are approved versus prohibited for official purposes.
- No published list of vetted AI vendors or models, nor an internal model registry.
- No public statement of technical controls required before deploying AI (e.g., DLP rules, connector allow‑lists, prompt logging; a configuration sketch of such controls follows this list).
- No published Data Protection Impact Assessments (DPIAs) for AI pilots or deployable tools.
- No clear chain of responsibility for approvals, oversight or incident response.
- No explicit policy on whether AI‑generated outputs and prompts are official records and how they will be retained or redacted.
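To make that omission concrete, the following is a minimal, illustrative sketch of the kind of machine-readable control baseline a deployment gate could check before a Copilot-style assistant is switched on for a department. The tool names, connector identifiers and retention figures are assumptions for illustration, not GNWT settings or vendor requirements.

```python
# Illustrative pre-deployment control baseline for a generative AI assistant.
# All names, connectors and thresholds below are hypothetical examples.
REQUIRED_CONTROLS = {
    "approved_tools": {"copilot-enterprise"},        # consumer endpoints stay blocked
    "connector_allow_list": {"sharepoint-policy-library", "sharepoint-public-docs"},
    "dlp_rules_enabled": True,                        # e.g. block prompts containing PII patterns
    "prompt_logging_enabled": True,                   # prompts and outputs retained as records
    "min_prompt_log_retention_days": 730,
}

def deployment_gaps(tenant_config: dict) -> list[str]:
    """Return the list of gaps that must be remediated before rollout proceeds."""
    gaps = []
    extra = set(tenant_config.get("connectors_enabled", [])) - REQUIRED_CONTROLS["connector_allow_list"]
    if extra:
        gaps.append(f"Connectors outside the allow-list: {sorted(extra)}")
    if not tenant_config.get("dlp_rules_enabled", False):
        gaps.append("DLP rules are not enabled for AI prompts")
    if not tenant_config.get("prompt_logging_enabled", False):
        gaps.append("Prompt and output logging is not enabled")
    elif tenant_config.get("prompt_log_retention_days", 0) < REQUIRED_CONTROLS["min_prompt_log_retention_days"]:
        gaps.append("Prompt log retention is shorter than the records schedule requires")
    return gaps
```

A gate like this does not replace a tenant audit, but it turns the guideline’s broad expectations into checks that can be run and published.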
Independent context: what other governments and regulators are doing
Several Canadian jurisdictions and professional regulators have moved from guidance to enforceable or semi‑enforceable rules in 2024–2025. The federal government released a Guide on the Use of Generative AI in 2025 that details risk‑management expectations for custody of data and human verification. Other provinces and agencies — including British Columbia’s generative AI policy and multiple courts and law societies — have published rules that require disclosure, human verification and records‑management practices when AI helps produce files that go into official processes.

Notable trends emerging from other public‑sector responses:
- Mandatory human‑in‑the‑loop verification when AI outputs affect entitlements, decisions or legal filings.
- Requirement to disclose AI assistance in court filings or professional submissions.
- Tenant‑level audits (Purview/DLP) and procurement clauses demanding non‑training, deletion and audit rights from vendors.
- Publication of a project registry for AI use cases with risk tiers (low/medium/high) so that high‑risk projects must pass impact assessments and third‑party review before deployment (a sketch of such a registry follows this list).
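To illustrate that last trend, the sketch below shows how a small registry could record use cases and refuse to mark high-risk projects as deployable until a DPIA and third-party review are on file. The structure and field names are assumptions, not a description of any existing federal, provincial or GNWT system.

```python
from dataclasses import dataclass

# Hypothetical registry entry; fields and risk tiers are illustrative assumptions.
@dataclass
class AIUseCase:
    department: str
    description: str
    data_inputs: list[str]
    vendor_contract: str
    risk_tier: str                  # "low" | "medium" | "high"
    dpia_completed: bool = False
    third_party_review: bool = False

def may_deploy(use_case: AIUseCase) -> bool:
    """High-risk projects need a DPIA and an external review; medium-risk needs a DPIA."""
    if use_case.risk_tier == "high":
        return use_case.dpia_completed and use_case.third_party_review
    if use_case.risk_tier == "medium":
        return use_case.dpia_completed
    return True  # low-risk pilots proceed under the general guideline

registry = [
    AIUseCase("Health", "Draft patient correspondence", ["clinical notes"], "Copilot enterprise terms", "high"),
    AIUseCase("Finance", "Summarise published reports", ["public PDFs"], "Copilot enterprise terms", "low"),
]
blocked = [u.description for u in registry if not may_deploy(u)]  # flagged until reviews are complete
```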
Risks the GNWT guideline leaves open — concrete examples
Data leakage through connectors and mis‑classification
Enterprise AI assistants can index repositories and surface documents across the organisation. If document classification is inconsistent, an assistant may surface sensitive material to people without proper clearance. Multiple assessments of public‑sector pilots have found this exact failure mode: indexing plus poor connector hygiene equals leakage. Proper controls require tenant audits and connector allow‑lists before broad rollout.
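One practical mitigation, sketched below purely as an illustration, is to screen documents before they are added to an assistant’s index and exclude anything that is unlabelled or above an agreed sensitivity threshold. The label names and threshold are assumptions, not GNWT classification levels.

```python
# Hypothetical sensitivity screening before indexing; labels and ordering are illustrative.
SENSITIVITY_ORDER = ["public", "internal", "protected-a", "protected-b"]
MAX_INDEXABLE = "internal"   # anything above this stays out of the assistant's index

def indexable(doc: dict) -> bool:
    """Index only documents with a known label at or below the agreed threshold."""
    label = doc.get("sensitivity_label")
    if label not in SENSITIVITY_ORDER:
        return False  # unlabelled or unknown labels are excluded by default
    return SENSITIVITY_ORDER.index(label) <= SENSITIVITY_ORDER.index(MAX_INDEXABLE)

documents = [
    {"title": "Wildfire communications plan", "sensitivity_label": "public"},
    {"title": "Employee health claim", "sensitivity_label": "protected-b"},
    {"title": "Untagged meeting notes"},  # no label, so it is excluded
]
to_index = [d for d in documents if indexable(d)]
```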
Hallucinations and fabricated citations
Generative models can invent plausible but false facts and citations. Public examples from 2025 and 2026 show the risk is real and costly: government‑commissioned reports in other provinces have contained fabricated sources that appear to have been generated with AI, forcing retractions and reviews. Those incidents underscore the need for mandatory provenance checks and bibliographic verification for outputs used in policy documents.
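One lightweight form of bibliographic verification, sketched below as a possible approach rather than a complete solution, is to flag any reference whose URL cannot be resolved so a human reviewer checks it before publication. The reference format and the reliance on simple HTTP checks are assumptions; a real workflow would also verify DOIs, titles and quoted passages.

```python
import urllib.error
import urllib.request

def unverifiable_references(references: list[dict]) -> list[dict]:
    """Flag references with no locator or with URLs that do not resolve."""
    flagged = []
    for ref in references:
        url = ref.get("url")
        if not url:
            flagged.append(ref)  # no locator at all: must be verified manually
            continue
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=10) as response:
                if response.status >= 400:
                    flagged.append(ref)
        except (urllib.error.URLError, ValueError):
            flagged.append(ref)  # unreachable or malformed: flag for human review
    return flagged
```

A check like this cannot prove a source says what the draft claims; it only narrows the list a human must read.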
Misinformation during emergencies
AI‑generated imagery can spread quickly on social networks and worsen crisis responses. NWT fire officials publicly condemned an AI‑generated image of a wildfire outside Fort Providence that circulated online as “sensationalized slop,” demonstrating how generative visuals can inflame public fear and confuse emergency communications. Governments must incorporate misinformation response into their AI governance and public communications playbooks.
Impacts on Indigenous and culturally‑sensitive services
The GNWT serves a disproportionately Indigenous population. Using AI to draft culturally sensitive communications or to make decisions affecting Indigenous communities raises particular risks — from misrepresentation to the inappropriate handling of cultural or sacred information. Experts argue for explicit consultation and cultural‑sensitivity rules before AI is used in these domains.
Labour and operational impacts
Unions have raised concerns about AI replacing bargaining‑unit labour or being used to fill vacancies. There are also questions about how AI errors figure into performance reviews and accountability. Responsible adoption requires negotiated workforce plans, reskilling budgets and clarity about what work remains human.
Strengths in the GNWT approach — what the guideline does well
- It signals awareness rather than denial. The GNWT has acknowledged generative AI as a material governance issue and set out basic principles that link to federal guidance. This cautious, principle‑based stance can reduce knee‑jerk blanket bans that drive staff to unapproved consumer tools.
- Training and an internal AI Hub are positive starting points. Making training available and centralising resources helps reduce “shadow AI” — staff experimenting with consumer models on personal devices when no sanctioned alternatives exist.
- Lean governance can be nimble. A short, high‑level guideline is easier to update than a heavyweight statute or hard rule, allowing the GNWT to adapt as vendor features and threat models evolve.
Why those strengths aren’t enough — gaps that should concern officials and the public
- Principle without operational gatekeeping is porous. High‑level guidance only reduces risk if it is backed by technical gatekeeping — approved tools, tenant audits, DLP rules, connector controls and immutable logging of prompts and outputs (see the logging sketch after this list). Experience from other governments shows these are non‑negotiable first steps before broad Copilot‑style rollouts.
- Reliance on vendor assurances without contract teeth is risky. Public statements that a vendor’s enterprise Copilot “does not leave the government server” must be verified in procurement documents: non‑training clauses, deletion guarantees, telemetry export and audit rights are essential contractual protections, and governments must not rely on marketing claims alone.
- The guideline does not substitute for impact assessments. DPIAs and records‑management mapping should be mandatory for any AI project that touches personal data or decision‑facing outputs. Without them, the GNWT risks future Freedom‑of‑Information surprises and legal exposure.
- No public transparency roadmap. For public trust, the GNWT should publish where and how AI is used in the public service, and explain who is responsible when AI contributes to decisions that affect citizens.
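For illustration, the sketch below shows one way “immutable” prompt-and-output logging can be approximated in software: each record commits to a hash of the previous record, so silent edits or deletions break the chain and become detectable on verification. The field names are assumptions, and this is a minimal sketch rather than a description of any GNWT or vendor system.

```python
import hashlib
import json
from datetime import datetime, timezone

class PromptLog:
    """Append-only, hash-chained log of prompts and outputs (illustrative fields)."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, user: str, prompt: str, output: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "prompt": prompt,
            "output": output,
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any altered or removed entry makes this return False."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True
```

True immutability also needs write-once storage and separation of duties, but even a simple chain makes after-the-fact tampering visible to an auditor.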
Practical, actionable checklist for GNWT leaders (technical and policy priorities)
The GNWT can keep the present guideline but must rapidly operationalise it. The following checklist is ordered and practical — start with the first items within 30–90 days.
- Technical readiness and tenant audit (30 days)
- Commission an independent audit of Microsoft tenant settings (Purview, DLP, connector permissions, retention and prompt logging) and publish a redaction and remediation plan.
- Procurement and contract protections (30–60 days)
- Amend AI‑related procurement templates to demand: non‑training clauses, deletion/export rights, audit and telemetry access, data‑residency guarantees (where required), and clear breach‑notification SLAs.
- Project registry and risk tiering (60 days)
- Establish a central AI registry where departments declare use cases, data inputs, vendor contracts and a risk tier. Require a DPIA and third‑party review for high‑risk projects (legal, health, Indigenous services, emergency response).
- Records and FOI policy (60–90 days)
- Define whether prompts, outputs and inputs are official records; set retention schedules and redaction procedures; clarify how FOI requests will be handled.
- Human‑in‑the‑loop mandates (immediate)
- For any output that informs decisions, legal filings or public communications, require named human attestation verifying accuracy and provenance (a sketch of such an attestation record follows this checklist). Courts and law societies are already moving in this direction.
- Workforce and union consultation (30–90 days)
- Negotiate with unions to define where AI can augment work and create reskilling or role‑reprofiling plans for tasks that might change. Make use of pilot evaluation KPIs to demonstrate real benefits and trade‑offs.
- Public transparency (90 days)
- Publish a plain‑language assurance statement for the public explaining where AI is used in direct service delivery, and a channel to request human review of any AI‑assisted decision.
- Ongoing monitoring and red‑teaming (120 days+)
- Fund external red‑team audits for high‑risk models and schedule regular incident reporting and public summaries so the public can see governance in action.
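To make the human-in-the-loop item concrete, here is a minimal sketch of what a named attestation record attached to an AI-assisted output could contain. The fields, identifiers and example values are hypothetical, not a prescribed GNWT form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HumanAttestation:
    """Illustrative attestation attached to an AI-assisted document before it is used."""
    document_id: str
    reviewer_name: str
    reviewer_role: str
    review_date: date
    facts_verified: bool        # every factual claim checked against a source
    citations_verified: bool    # every reference resolved and read
    ai_tools_disclosed: str     # e.g. "Microsoft Copilot, first-draft summarisation"

    def is_complete(self) -> bool:
        return self.facts_verified and self.citations_verified and bool(self.reviewer_name)

# Hypothetical example values for illustration only.
attestation = HumanAttestation(
    document_id="EXAMPLE-2026-001",
    reviewer_name="A. Example",
    reviewer_role="Policy analyst",
    review_date=date.today(),
    facts_verified=True,
    citations_verified=True,
    ai_tools_disclosed="Microsoft Copilot, first draft",
)
assert attestation.is_complete()
```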
A staged model the GNWT can follow (low technical overhead, high governance value)
- Stage 0 — Contain shadow AI: Block public consumer endpoints from government networks and provide sanctioned, tenant‑managed alternatives. Make licences conditional on training and managerial approval.
- Stage 1 — Sanction low‑risk pilots: Allow summarisation, transcription and first‑draft assistance for low‑sensitivity work with mandatory human verification and prompt logging.
- Stage 2 — Govern medium‑risk use cases: Require DPIAs, retention policies and procurement addenda for any use of AI in services that affect rights, finances or health.
- Stage 3 — High‑risk restriction and external audit: Restrict or forbid unsupervised AI use in adjudication, licensing, entitlement determinations and Indigenous cultural governance unless a rigorous impact assessment and third‑party audit clear the application.
Where the GNWT’s public statements should be verified (claims to treat as provisional until verified)
- “The pilot version available to GNWT employees is secure and information does not leave government servers.” This is a vendor‑architecture claim that should be confirmed by an independent tenant audit and by contract clauses that guarantee non‑training and deletion rights. The GNWT says it is conducting legal reviews of vendor terms; that review should be summarised publicly.
- “Existing records, privacy and security policies are adequate.” This is a normative statement; its validity depends on whether those policies explicitly cover AI artifacts (prompts, agent outputs, retriever corpora) and whether they have been updated to reflect AI use cases. Independent DPIAs are the right verification step.
What success looks like — measurable KPIs GNWT should publish
- Time saved per pilot task (baseline vs. AI‑assisted).
- Percentage of AI‑assisted outputs requiring substantive human edits (computed as in the sketch after this list).
- Number of incidents where AI surfaced incorrectly classified or sensitive documents.
- Volume and retention status of prompt logs, with proportion redacted for PII.
- Number of DPIAs completed and their risk-tier outcomes.
- Carbon intensity per AI request, to support public reporting on environmental impact.
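As a simple illustration, the first two KPIs could be computed from pilot records like the hypothetical ones below; the field names and values are assumptions that a real pilot would define up front.

```python
# Hypothetical pilot records; values are illustrative only.
pilot_tasks = [
    {"baseline_minutes": 45, "assisted_minutes": 20, "substantive_edits": True},
    {"baseline_minutes": 30, "assisted_minutes": 25, "substantive_edits": False},
    {"baseline_minutes": 60, "assisted_minutes": 35, "substantive_edits": True},
]

avg_time_saved = sum(t["baseline_minutes"] - t["assisted_minutes"] for t in pilot_tasks) / len(pilot_tasks)
edit_rate = sum(t["substantive_edits"] for t in pilot_tasks) / len(pilot_tasks)

print(f"Average time saved per task: {avg_time_saved:.1f} minutes")
print(f"Share of outputs needing substantive human edits: {edit_rate:.0%}")
```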
Final assessment and recommendations
The GNWT’s guideline is a reasonable starting signal that the government recognises generative AI as a policy area. It is also an insufficient operational framework for a government that handles sensitive personal, cultural and legal information. Without clearer vendor‑level contract protections, tenant audits, DPIAs, a model registry and explicit human‑in‑the‑loop and records rules, the GNWT risks incidents that could damage public trust and create legal or operational exposure.

Key next steps for GNWT leaders:
- Publish the results of an independent tenant audit and a summary of the legal review of Copilot terms.
- Require department‑level DPIAs for all AI pilots and establish a central registry with named owners and risk tiers.
- Insert enforceable procurement clauses (non‑training, deletion and audit rights) into all AI contracts.
- Negotiate workforce and union agreements on AI augmentation and role redesign.
- Commit to public transparency: publish an annual AI usage and incident summary.
The GNWT’s next choices will determine whether the territory harnesses AI for public‑service gain or becomes a case study in how ungoverned generative systems can create outsized problems for small administrations with limited technical capacity. The technical and policy fixes are well‑known; the political and budget decisions to implement them are the real test.
Source: cabinradio.ca NWT government has no plan to develop AI policy