The Northwest Territories government says it has no plans to create a standalone AI policy for the public service, relying instead on a high‑level generative AI guideline released in May 2025 and existing information‑management rules — a stance that has prompted praise for caution from some quarters and sharp criticism from privacy, legal and labour experts who say the guideline leaves too many operational and accountability gaps.

Background​

The Government of the Northwest Territories (GNWT) published a high‑level guideline on the use of generative artificial intelligence in May 2025, and the Department of Finance reports the public service has access to an internal “AI Hub” offering the guideline, general training on generative AI and Microsoft Copilot training for employees. The finance minister has publicly described existing cybersecurity and information‑management arrangements as robust enough that the GNWT will not develop a separate, standalone AI policy. The guideline sets out broad expectations: establish rules and responsibilities for generative AI, protect data and outputs, explain why and when AI is used, and monitor AI deployments. It points users to federal guidance on generative AI and to existing GNWT policies on privacy and records handling. At the same time, the GNWT says it has not carried out its own privacy impact assessments for AI tools; instead, it reports it has leveraged assessments from other jurisdictions and is conducting legal reviews of vendor terms for common tools such as Microsoft Copilot. This approach — a minimalist, guideline‑first posture that leans on existing rules rather than a bespoke, enforceable AI policy — is increasingly common among smaller governments and public organisations that want to enable productivity gains while limiting procurement costs and administrative overhead. Yet the practical consequences of that posture are what has triggered debate in Yellowknife and among legal, privacy and labour observers.

Why this story matters to IT leaders and public servants​

Generative AI is no longer experimental. Tools such as ChatGPT, DALL·E, Claude and Microsoft Copilot are being used across governments to draft text, summarise meetings, help with code and even create images and transcripts. They offer measurable productivity gains, but they also introduce systemic risks: leakage of sensitive data, fabricated or “hallucinated” citations and facts, biased outputs, environmental cost, and new legal and records‑management headaches.
The GNWT’s decision to avoid a formal AI policy means the territory’s approach to those risks will be shaped by:
  • How strictly existing privacy, cybersecurity and records rules are enforced in AI use cases.
  • Whether vendor contracts include enforceable protections such as non‑training clauses, deletion and audit rights.
  • Whether tenant and connector settings for productivity assistants (e.g., Copilot) have been independently audited.
  • How the GNWT handles transparency and provenance for AI‑assisted outputs that feed into decisions affecting the public.
Cabin Radio’s reporting captures both the GNWT’s position and the pushback from experts worried the guideline is insufficient for operational risk management.

What the GNWT guideline actually says — and what it doesn’t​

What it says​

The GNWT guideline is a short, high‑level document that encourages:
  • Clear roles and responsibilities for generative AI use;
  • Safeguards to protect data and manage risks;
  • Transparency about why, how and when generative AI is used;
  • Ongoing monitoring of generative AI programs.
It explicitly links to the federal government’s guide on generative AI, signalling that the territory intends to follow national best practices rather than create its own regulatory architecture. The finance department says employees have access to training materials via an internal AI Hub.

What it omits or leaves vague​

  • No binding rules on which tools are approved versus prohibited for official purposes.
  • No published list of vetted AI vendors or models, nor an internal model registry.
  • No public statement of technical controls required before deploying AI (e.g., DLP rules, connector allow‑lists, prompt logging).
  • No published Data Protection Impact Assessments (DPIAs) for AI pilots or deployable tools.
  • No clear chain of responsibility for approvals, oversight or incident response.
  • No explicit policy on whether AI‑generated outputs and prompts are official records and how they will be retained or redacted.
Legal and privacy experts quoted in the coverage say the guideline reads as a statement of intent rather than enforceable governance and that the GNWT needs more specific policies tailored to different risk tiers and use cases.

Independent context: what other governments and regulators are doing​

Several Canadian jurisdictions and professional regulators have moved from guidance to enforceable or semi‑enforceable rules in 2024–2025. The federal government released a Guide on the Use of Generative AI in 2025 that details risk‑management expectations for custody of data and human verification. Other provinces and agencies — including British Columbia’s generative AI policy and multiple courts and law societies — have published rules that require disclosure, human verification and records‑management practices when AI helps produce files that go into official processes. Notable trends emerging from other public‑sector responses:
  • Mandatory human‑in‑the‑loop verification when AI outputs affect entitlements, decisions or legal filings.
  • Requirement to disclose AI assistance in court filings or professional submissions.
  • Tenant‑level audits (Purview/DLP) and procurement clauses demanding non‑training, deletion and audit rights from vendors.
  • Publication of a project registry for AI use cases with risk tiers (low/medium/high) so that high‑risk projects must pass impact assessments and third‑party review before deployment.
These operational controls are not theoretical: experience in other governments shows that pilot programs deliver productivity gains but also reveal misconfigurations that lead to sensitive‑data exposure, indexing of internal documents, and increased verification overhead. Independent analyses recommend technical readiness checks, procurement clauses and prompt‑and‑response logging as near‑universal first steps.

Risks the GNWT guideline leaves open — concrete examples​

Data leakage through connectors and mis‑classification​

Enterprise AI assistants can index repositories and surface documents across the organisation. If document classification is inconsistent, an assistant may surface sensitive material to people without proper clearance. Multiple assessments of public‑sector pilots have found this exact failure mode: indexing plus poor connector hygiene equals leakage. Proper controls require tenant audits and connector allow‑lists before broad rollout.
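A minimal sketch of the kind of gate that can sit in front of an assistant's retrieval layer, assuming documents already carry sensitivity labels and an allow‑list of approved connectors exists; the connector names, labels and clearance model below are illustrative, not GNWT configuration:

```python
# Illustrative sketch: gate retrieval on a connector allow-list and sensitivity labels.
# Connector names, labels and the clearance model are hypothetical examples.
from dataclasses import dataclass

APPROVED_CONNECTORS = {"sharepoint-policy-library", "records-archive"}  # assumed allow-list
CLEARANCE_RANK = {"public": 0, "internal": 1, "protected-a": 2, "protected-b": 3}

@dataclass
class Document:
    doc_id: str
    connector: str      # which connector indexed the document
    sensitivity: str    # label applied during records classification

def assistant_may_return(doc: Document, user_clearance: str) -> bool:
    """Return True only if the document came from an approved connector
    and the user's clearance meets or exceeds the document's label."""
    if doc.connector not in APPROVED_CONNECTORS:
        return False  # unapproved connector: never surface; flag for review instead
    return CLEARANCE_RANK[user_clearance] >= CLEARANCE_RANK[doc.sensitivity]

# Example: a Protected B briefing note is withheld from an 'internal'-cleared user.
note = Document("brief-042", "records-archive", "protected-b")
print(assistant_may_return(note, "internal"))     # False
print(assistant_may_return(note, "protected-b"))  # True
```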

Hallucinations and fabricated citations​

Generative models can invent plausible but false facts and citations. Public examples from 2025 and 2026 show the risk is real and costly: government‑commissioned reports in other provinces have contained fabricated sources that appear to have been generated with AI, forcing retractions and reviews. Those incidents underscore the need for mandatory provenance checks and bibliographic verification for outputs used in policy documents.
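One concrete form of bibliographic verification is a pre‑publication pass that flags any citation not found in a human‑curated reference list; the draft text and titles below are invented for illustration:

```python
# Illustrative sketch: flag citations in a draft that do not appear in a
# human-curated reference list. Titles below are invented examples.
import re

approved_references = {
    "NWT Health Indicators Report 2023",
    "GNWT Open Government Policy",
}

draft = (
    "Spending rose 4% (NWT Health Indicators Report 2023). "
    "Adoption doubled (Territorial AI Outcomes Survey 2024)."
)

cited = re.findall(r"\(([^()]+\d{4})\)", draft)  # crude pattern: "(Title YYYY)"
unverified = [c for c in cited if c not in approved_references]

for citation in unverified:
    print(f"UNVERIFIED CITATION - requires human check before release: {citation}")
```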

Misinformation during emergencies​

AI‑generated imagery can spread quickly on social networks and worsen crisis responses. NWT fire officials publicly condemned an AI‑generated image of a wildfire outside Fort Providence that circulated online as “sensationalized slop,” demonstrating how generative visuals can inflame public fear and confuse emergency communications. Governments must incorporate misinformation response into their AI governance and public communications playbooks.

Impacts on Indigenous and culturally‑sensitive services​

The GNWT serves a disproportionately Indigenous population. Using AI to draft culturally sensitive communications or to make decisions affecting Indigenous communities raises particular risks — from misrepresentation to the inappropriate handling of cultural or sacred information. Experts argue for explicit consultation and cultural‑sensitivity rules before AI is used in these domains.

Labour and operational impacts​

Unions have raised concerns about AI replacing bargaining‑unit labour or being used to fill vacancies. There are also questions about how AI errors figure into performance reviews and accountability. Responsible adoption requires negotiated workforce plans, reskilling budgets and clarity about what work remains human.

Strengths in the GNWT approach — what the guideline does well​

  • It signals awareness rather than denial. The GNWT has acknowledged generative AI as a material governance issue and set out basic principles that link to federal guidance. This cautious, principle‑based stance can reduce knee‑jerk blanket bans that drive staff to unapproved consumer tools.
  • Training and an internal AI Hub are positive starting points. Making training available and centralising resources helps reduce “shadow AI” — staff experimenting with consumer models on personal devices when no sanctioned alternatives exist.
  • Lean governance can be nimble. A short, high‑level guideline is easier to update than a heavyweight statute or hard rule, allowing the GNWT to adapt as vendor features and threat models evolve.
These strengths matter for an administration with limited procurement and technical capacity; they create room to iterate rather than lock the government into a prematurely rigid regime.

Why those strengths aren’t enough — gaps that should concern officials and the public​

  • Principle without operational gatekeeping is porous. High‑level guidance only reduces risk if it is backed by technical gatekeeping — approved tools, tenant audits, DLP rules, connector controls and immutable logging of prompts and outputs. Experience from other governments shows these are non‑negotiable first steps before broad Copilot‑style rollouts.
  • Reliance on vendor assurances without contract teeth is risky. Public statements that a vendor’s enterprise Copilot “does not leave the government server” must be verified in procurement documents: non‑training clauses, deletion guarantees, telemetry export and audit rights are essential contractual protections, and governments must not rely on marketing claims alone (a contract‑check sketch follows this list).
  • The guideline does not substitute for impact assessments. DPIAs and records‑management mapping should be mandatory for any AI project that touches personal data or decision‑facing outputs. Without them, the GNWT risks future Freedom‑of‑Information surprises and legal exposure.
  • No public transparency roadmap. For public trust, the GNWT should publish where and how AI is used in the public service, and explain who is responsible when AI contributes to decisions that affect citizens.
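The contract point lends itself to a mechanical check at intake: before signing, confirm that every baseline protection appears in the proposed terms. A minimal sketch, with clause identifiers assumed for illustration rather than drawn from any GNWT procurement template:

```python
# Illustrative sketch: verify a proposed AI contract covers the baseline protections
# discussed above. Clause identifiers are assumed examples, not a legal template.
REQUIRED_CLAUSES = {
    "no_training_on_government_data",
    "deletion_and_export_rights",
    "audit_and_telemetry_access",
    "breach_notification_sla",
}

def missing_protections(contract_clauses: set[str]) -> set[str]:
    """Return the required protections absent from a proposed contract."""
    return REQUIRED_CLAUSES - contract_clauses

proposed = {"no_training_on_government_data", "breach_notification_sla"}
gaps = missing_protections(proposed)
if gaps:
    print("Do not sign - missing clauses:", ", ".join(sorted(gaps)))
```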

Practical, actionable checklist for GNWT leaders (technical and policy priorities)​

The GNWT can keep the present guideline but must rapidly operationalise it. The following checklist is ordered and practical — start with the first items within 30–90 days.
  1. Technical readiness and tenant audit (30 days)
    • Commission an independent audit of Microsoft tenant settings (Purview, DLP, connector permissions, retention and prompt logging) and publish a redaction and remediation plan.
  2. Procurement and contract protections (30–60 days)
    • Amend AI‑related procurement templates to demand: non‑training clauses, deletion/export rights, audit and telemetry access, data‑residency guarantees (where required), and clear breach‑notification SLAs.
  3. Project registry and risk tiering (60 days)
    • Create a central AI registry where departments declare use cases, data inputs, vendor contracts and a risk tier (a registry sketch follows this checklist). Require a DPIA and third‑party review for high‑risk projects (legal, health, Indigenous services, emergency response).
  4. Records and FOI policy (60–90 days)
    • Define whether prompts, outputs and inputs are official records; set retention schedules and redaction procedures; clarify how FOI requests will be handled.
  5. Human‑in‑the‑loop mandates (immediate)
    • For any output that informs decisions, legal filings or public communications, require named human attestation verifying accuracy and provenance. Courts and law societies are already moving in this direction.
  6. Workforce and union consultation (30–90 days)
    • Negotiate with unions to define where AI can augment work and create reskilling or role‑reprofiling plans for tasks that might change. Make use of pilot evaluation KPIs to demonstrate real benefits and trade‑offs.
  7. Public transparency (90 days)
    • Publish a plain‑language assurance statement for the public explaining where AI is used in direct service delivery, and a channel to request human review of any AI‑assisted decision.
  8. Ongoing monitoring and red‑teaming (120 days+)
    • Fund external red‑team audits for high‑risk models and schedule regular incident reporting and public summaries so the public can see governance in action.
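To make items 3 and 5 concrete, the registry entry and attestation requirement can be expressed as a structured record that refuses deployment until the controls for its risk tier are in place. A minimal sketch; the field names, tier rules and example project are assumptions for illustration, not GNWT policy:

```python
# Illustrative sketch of a project registry entry with risk-tier gating and
# human attestation. Field names and tier rules are assumptions, not GNWT policy.
from dataclasses import dataclass, field

@dataclass
class AIProject:
    name: str
    department: str
    data_inputs: list[str]            # e.g. ["personal", "health", "public"]
    informs_decisions: bool           # output feeds entitlements, filings, etc.
    dpia_completed: bool = False
    third_party_review: bool = False
    attestations: list[str] = field(default_factory=list)  # named human sign-offs

    @property
    def risk_tier(self) -> str:
        sensitive = {"personal", "health", "indigenous", "legal"}
        if self.informs_decisions or sensitive & set(self.data_inputs):
            return "high"
        return "medium" if self.data_inputs else "low"

    def may_deploy(self) -> bool:
        """High-risk projects need a DPIA, third-party review and a named attester."""
        if self.risk_tier != "high":
            return True
        return self.dpia_completed and self.third_party_review and bool(self.attestations)

pilot = AIProject("benefits-letter-drafting", "Finance", ["personal"], informs_decisions=True)
print(pilot.risk_tier, pilot.may_deploy())   # high False
pilot.dpia_completed = True
pilot.third_party_review = True
pilot.attestations.append("J. Smith, Director")
print(pilot.may_deploy())                    # True
```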

A staged model the GNWT can follow (low technical overhead, high governance value)​

  • Stage 0 — Contain shadow AI: Block public consumer endpoints from government networks and provide sanctioned, tenant‑managed alternatives. Make licences conditional on training and managerial approval.
  • Stage 1 — Sanction low‑risk pilots: Allow summarisation, transcription and first‑draft assistance for low‑sensitivity work with mandatory human verification and prompt logging.
  • Stage 2 — Govern medium‑risk use cases: Require DPIAs, retention policies and procurement addenda for any use of AI in services that affect rights, finances or health.
  • Stage 3 — High‑risk restriction and external audit: Restrict or forbid unsupervised AI use in adjudication, licensing, entitlement determinations and Indigenous cultural governance unless a rigorous impact assessment and third‑party audit clear the application.
This staged approach gives the GNWT a place to start while retaining the ability to scale governance as adoption expands.

Where the GNWT’s public statements should be verified (claims to treat as provisional until independently verified)​

  • “The Copilot pilot version available to GNWT employees is secure and information does not leave government servers.” This is a vendor‑architecture claim that should be confirmed by an independent tenant audit and by contract clauses that guarantee non‑training and deletion rights. The GNWT says it is conducting legal reviews of vendor terms; that review should be summarised publicly.
  • “Existing records, privacy and security policies are adequate.” This is a normative statement; its validity depends on whether those policies explicitly cover AI artifacts (prompts, agent outputs, retriever corpora) and whether they have been updated to reflect AI use cases. Independent DPIAs are the right verification step.
Flag: until tenant audits and procurement clauses are produced and independent DPIAs are published, both claims should be treated as provisional.

What success looks like — measurable KPIs GNWT should publish​

  • Time saved per pilot task (baseline vs. AI‑assisted).
  • Percentage of AI‑assisted outputs requiring substantive human edits.
  • Number of incidents where AI surfaced incorrectly classified or sensitive documents.
  • Volume and retention status of prompt logs, with proportion redacted for PII.
  • Number of DPIAs completed and their risk-tier outcomes.
  • Carbon intensity per AI request, for public reporting on environmental impact.
Measuring both productivity and harm metrics will allow GNWT to justify scaling or tightening controls. Practical experience shows organisations that track both benefits and errors can make evidence‑based policy decisions instead of reacting to crises.
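A sketch of how a pilot team could derive two of the KPIs above from a simple task log; the log format and figures are invented for illustration:

```python
# Illustrative sketch: derive pilot KPIs from a task log.
# The log structure and values are invented examples, not GNWT data.
tasks = [
    {"minutes_baseline": 45, "minutes_with_ai": 20, "substantive_edits": True},
    {"minutes_baseline": 30, "minutes_with_ai": 12, "substantive_edits": False},
    {"minutes_baseline": 60, "minutes_with_ai": 35, "substantive_edits": True},
]

time_saved = sum(t["minutes_baseline"] - t["minutes_with_ai"] for t in tasks)
edit_rate = sum(t["substantive_edits"] for t in tasks) / len(tasks)

print(f"Total minutes saved across pilot tasks: {time_saved}")
print(f"Share of AI-assisted outputs needing substantive human edits: {edit_rate:.0%}")
```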

Final assessment and recommendations​

The GNWT’s guideline is a reasonable starting signal that the government recognises generative AI as a policy area. It is also an insufficient operational framework for a government that handles sensitive personal, cultural and legal information. Without clearer vendor‑level contract protections, tenant audits, DPIAs, a model registry and explicit human‑in‑the‑loop and records rules, the GNWT risks incidents that could damage public trust and create legal or operational exposure.
Key next steps for GNWT leaders:
  • Publish the results of an independent tenant audit and a summary of the legal review of Copilot terms.
  • Require department‑level DPIAs for all AI pilots and establish a central registry with named owners and risk tiers.
  • Insert enforceable procurement clauses (non‑training, deletion and audit rights) into all AI contracts.
  • Negotiate workforce and union agreements on AI augmentation and role redesign.
  • Commit to public transparency: publish an annual AI usage and incident summary.
The danger is not the technology itself but the lack of governance when that technology touches government records, court filings, emergency communications and culturally sensitive services. GNWT can keep its agile, guideline‑first posture — but only if it pairs that posture with hard, verifiable controls and independent assurance. Short of that, the promise of productivity will likely be eclipsed by the costs of surprise incidents, eroded public trust and preventable legal exposure.
The GNWT’s next choices will determine whether the territory harnesses AI for public‑service gain or becomes a case study in how ungoverned generative systems can create outsized problems for small administrations with limited technical capacity. The technical and policy fixes are well‑known; the political and budget decisions to implement them are the real test.

Source: cabinradio.ca NWT government has no plan to develop AI policy
 

The Northwest Territories government says it will not create a standalone artificial intelligence policy for the public service, relying instead on a high‑level generative AI guideline released in May 2025 and existing information‑management rules — a choice that places the GNWT inside a growing policy debate about whether high‑level guidance is an adequate response to the operational, privacy and accountability risks posed by generative AI.

Background​

Canada’s territorial and provincial governments, like many public sector organizations worldwide, are testing and deploying generative AI tools — from document‑drafting assistants to transcription aides — while grappling with questions about privacy, procurement, records management and workforce impacts. The GNWT’s guideline, the Department of Finance’s “AI Hub” training and an enterprise deployment of Microsoft Copilot are the territory’s present measures. GNWT leaders argue those measures, plus existing privacy and information‑management policies, are sufficient for now. At the same time, experts and some public‑service employees say a guideline is not a substitute for policy: it lacks enforceable rules, clear lines of responsibility, risk‑tiered approvals for tool use, and mechanisms to capture AI prompts and outputs as part of public records. The risks are concrete: several high‑profile instances outside the NWT have shown how generative systems can inject false citations, hallucinate evidence or expose sensitive data when governance is immature.

What the GNWT has done — and what it has not​

The guideline and the “AI Hub”​

  • In May 2025 the GNWT published a high‑level guideline for generative AI that recommends establishing roles and responsibilities, putting safeguards in place, providing transparency about use, and monitoring programs. The guideline points to federal guidance and existing territorial policies on privacy and information handling. GNWT employees reportedly have access to an internal “AI Hub” containing the guideline, general generative AI training, and Microsoft Copilot training.
  • The Finance Department says it has leveraged privacy impact assessments performed by other jurisdictions and is conducting legal reviews of common AI tools’ terms of use, including Microsoft Copilot. The GNWT has not, however, produced its own territorial privacy impact assessment for AI tools.

No standalone AI policy — an explicit choice​

Finance Minister Caroline Wawzonek told local reporters the GNWT is “not going to add more to the existing policies that are there,” and that the public should have “a pretty high level of confidence” in the government’s cybersecurity and information protection. The government frames its approach as a measured reliance on existing controls plus a guideline rather than a new, separate regulatory framework.

Why experts say a guideline falls short​

Lack of operational rules and enforceable controls​

Legal and privacy experts argue that a high‑level guideline without operational policy and technical enforcement leaves too much discretion to individual employees and departments. A robust AI governance program typically includes:
  • DPIAs (data protection / privacy impact assessments) tailored to AI‑enabled services.
  • A project registry and risk‑tiering for AI projects (explicitly defining which uses are low, medium or high risk).
  • Mandatory approval gates for any AI tool that will process personal, health, culturally sensitive, or classified information.
  • Procurement clauses that explicitly limit vendor use of government data for external model training and require audit rights.
The NWT’s privacy commissioner welcomed the guideline as a first step but said it reads more as a statement of intent than a set of applicable policies for specific AI applications. That view is echoed by academic experts who say the guideline mixes departmental uses and ad‑hoc employee experimentation without clarifying who must vet and approve tools before they enter official workflows.

Chain of responsibility and accountability missing​

A durable policy must assign responsibility: who decides which AI models are approved; who signs off on DPIAs; who is accountable when an AI‑assisted product (for example, a court transcript or a medical note) contains errors; and how human sign‑off is recorded. The GNWT guideline currently lacks these operational assignments, creating potential ambiguity in both normal operations and incident response.

Cultural and Indigenous data concerns​

Experts urged the GNWT to explicitly address culturally sensitive information and Indigenous data sovereignty in any AI policy. AI systems can inadvertently surface or misuse culturally restricted knowledge; consultation and consent mechanisms with Indigenous communities are essential design elements of an ethical public‑sector AI policy. The guideline does not prescribe engagement processes or cultural review boards for AI projects.

The operational and technical reality: Copilot as a case study​

The GNWT has provisioned a version of Microsoft Copilot for employees and told reporters that the available instance is “secure” and that information “does not leave the government’s server system.” Microsoft’s enterprise documentation supports this general model: Copilot for Microsoft 365 is designed to respect tenant boundaries and Microsoft states that organizational data processed in enterprise tenants is not used to train Microsoft’s foundational models unless specific opt‑in arrangements exist.
At the same time, independent technical reviews and vendor analyses emphasize that tenant configuration, connector rules, Purview classification, and DLP controls must be actively configured and audited by the tenant. Simply enabling Copilot without those technical and process controls can expose sensitive material.
Key verifiable points about enterprise Copilot:
  • Microsoft’s documentation says prompts and responses in enterprise Copilot are not used to train foundation LLMs and that tenant admin controls, Purview, and encryption protect data. Customers are the primary controllers of their data flows.
  • Microsoft also documents which user categories and scenarios are excluded from training and offers opt‑out choices; however, the protections are conditional on proper tenant setup and subscription features. Independent security writeups note that some telemetry and metadata may still be used to improve services, so tenant administrators must understand and configure options properly.
For the GNWT, the practical implication is clear: claiming that “Copilot is secure” is accurate only if the tenant has implemented identity controls, Purview sensitivity labels, connector allow‑lists, DLP, and audited retention of prompts and outputs. Those are implementable, but they require technical skills, monitoring, and a written policy that ties configuration to business rules.
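Of those controls, audited retention of prompts and outputs is the easiest to picture in code. The sketch below is a minimal, hypothetical Python illustration (not a GNWT system and not a Microsoft API) of an append‑only, hash‑chained interaction log that a tenant could keep and feed into its audit and SIEM processes; the file name, field names and example values are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative, append-only audit log for AI prompts and responses.
# Each record carries the hash of the previous record, so any later
# tampering with the file breaks the chain and is detectable on review.

LOG_PATH = Path("copilot_audit.jsonl")  # hypothetical location, not a real GNWT path

def _last_hash() -> str:
    if not LOG_PATH.exists():
        return "0" * 64
    lines = LOG_PATH.read_text(encoding="utf-8").strip().splitlines()
    if not lines:
        return "0" * 64
    return json.loads(lines[-1])["record_hash"]

def log_interaction(user_id: str, tool: str, prompt: str, response: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "prompt": prompt,
        "response": response,
        "previous_hash": _last_hash(),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    # Hypothetical example of a logged interaction.
    log_interaction("employee-042", "Copilot",
                    "Summarise the attached briefing note.",
                    "The note covers three funding options...")
```

A log of this kind only becomes useful governance if it is retained under records rules, exported to the SIEM, and actually reviewed; the code is the trivial part.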

Real‑world harms that underpin the caution​

Generative AI’s propensity for hallucinations and fabricated citations is not hypothetical. High‑profile public documents in other Canadian jurisdictions were found to contain fabricated references and erroneous citations — errors likely produced or enabled by generative AI — which eroded public trust and triggered formal reviews. Those episodes show the reputational and legal fallout when verification practices are weak. Other operational harms public bodies must plan for include:
  • Unintentional disclosure of personal health or justice information through poorly configured connectors or indexed repositories.
  • Hallucinated legal or evidentiary claims that enter the public record and affect decisions.
  • Erosion of employee skills if staff over‑rely on AI drafts without critical appraisal.
  • Environmental and procurement impacts arising from third‑party hosting and compute choices.

Balanced view: benefits the GNWT is rightly pursuing​

Despite the risks, the GNWT’s experiments show why governments are eager to adopt AI:
  • Operational productivity: transcription, summarization, routine drafting and administrative automation can free clinicians, judges, and bureaucrats from repetitive work and reduce backlogs. GNWT pilots — for medical note‑taking and court transcript rough drafts — mirror international findings that certain tasks see measurable time savings under controlled deployments.
  • Consistency and accessibility: AI can help standardize communications, produce draft replies, and make services more accessible to residents with disabilities (for example, live transcription). When deployed with human verification, these tools can raise baseline service quality.
  • Training and uptake: making a sanctioned, tenant‑bound assistant available and pairing it with training reduces shadow AI (staff turning to consumer chatbots for official work), which is a practical harm‑reduction strategy.
The GNWT’s current approach — centralized training via an “AI Hub” and a tenant‑bound Copilot instance — is consistent with early best practice steps when organizations are still defining risk appetite and building capability.

What good policy would add — a practical checklist for the GNWT​

The following is a prioritized, operational checklist that converts a high‑level guideline into enforceable policy and practice.
  1. Establish governance and roles
    1. Appoint a cross‑government AI governance lead (or Chief AI Officer) with budget and authority to approve or reject AI use cases.
    2. Create departmental AI stewards responsible for vetting projects, DPIAs, and training completion.
  2. Risk‑tiered approvals
    1. Define low/medium/high risk categories (e.g., public communications vs. health records vs. legal documents).
    2. Require DPIAs for any medium/high‑risk project and independent technical review for high‑risk projects before authorization (a minimal sketch of such an approval gate follows this checklist).
  3. Technical controls and procurement
    • Enforce tenant settings: Purview sensitivity labels, DLP, connector allow‑lists, and conditional access. Configure Double Key Encryption or equivalent for the most sensitive datasets where feasible.
    • Embed procurement clauses: non‑training guarantees, clear data‑residency commitments, deletion rights, audit access and telemetry limits.
  4. Records, retention and transparency
    • Treat prompts and AI outputs that inform official documents as potential public records; define retention windows, redaction workflows and FOI processes.
    • Publish an AI use transparency statement: list classes of decisions where AI is used and describe human oversight mechanisms.
  5. Human‑in‑the‑loop and training
    • Mandate sign‑off and attestation for any AI‑assisted output used externally or in decision‑making.
    • Make role‑based training and prompt‑validation certification prerequisites for access to enterprise Copilot or other sanctioned assistants.
  6. Audit, incident response and independent review
    • Log immutable prompt/response records and integrate them into audit and SIEM processes.
    • Schedule periodic third‑party red‑team testing and privacy/security audits.
    • Establish an incident response playbook specifically for AI‑related leaks, hallucinations and reputational incidents.
  7. Indigenous engagement and cultural review
    • Create consultation protocols, informed consent processes, and cultural review boards for any project that touches Indigenous or culturally sensitive data.
  8. Environmental and reporting measures
    • Track AI‑related compute emissions and include them in greenhouse gas reporting where material.
    • Prefer smaller task‑specific models or retrieval‑augmented pipelines where they meet business needs and reduce carbon intensity.
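As a concrete illustration of item 2, the following is a minimal, hypothetical sketch of a risk‑tiered approval gate. The data categories, tiers and gating rules are assumptions chosen to mirror the checklist, not an existing GNWT or vendor mechanism.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Illustrative mapping from the kinds of data a project touches to a risk tier.
HIGH_RISK_DATA = {"personal_health", "justice_records", "indigenous_cultural", "classified"}
MEDIUM_RISK_DATA = {"personal_contact", "hr_records", "draft_policy"}

def classify(data_categories: set[str]) -> Risk:
    if data_categories & HIGH_RISK_DATA:
        return Risk.HIGH
    if data_categories & MEDIUM_RISK_DATA:
        return Risk.MEDIUM
    return Risk.LOW

def approval_gate(data_categories: set[str], dpia_done: bool, technical_review_done: bool) -> bool:
    """Return True only if the project meets the controls its tier requires."""
    tier = classify(data_categories)
    if tier is Risk.LOW:
        return True
    if tier is Risk.MEDIUM:
        return dpia_done
    return dpia_done and technical_review_done  # HIGH: DPIA plus independent review

if __name__ == "__main__":
    # A transcription pilot touching health data is blocked until both a DPIA
    # and an independent technical review are on file.
    print(approval_gate({"personal_health"}, dpia_done=True, technical_review_done=False))  # False
    print(approval_gate({"draft_policy"}, dpia_done=True, technical_review_done=False))     # True
```

In practice the category lists and gating rules would come from the governance body, be written into policy, and be reviewed alongside each DPIA rather than hard‑coded.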

Governance examples the GNWT can adapt​

Several public sectors have combined tenant‑bound assistants with enforceable policy:
  • Some jurisdictions require mandatory DPIAs and central registries for AI projects, plus statutory duties of care for automated decisions.
  • Municipalities that have gone further restrict AI use to approved, enterprise‑grade tools while temporarily blocking consumer models for official work until policies and technical DLP rules are in place. These approaches reduce shadow AI while the governance framework is built.
A blended approach — maintain the GNWT’s tenant‑bound Copilot and AI Hub while rapidly building the operational policy elements above — would be a pragmatic next step.

What to watch next — risk indicators and public‑confidence signals​

  • Does the GNWT publish DPIAs for its pilots (medical note‑taking, court transcripts)? Public DPIAs materially increase trust.
  • Are prompts and outputs from Copilot and other sanctioned tools captured, retained and auditable under existing records rules?
  • Are procurement contracts updated to include non‑training clauses and explicit audit rights?
  • Is there a named, resourced governance lead with authority across departments to approve high‑risk use cases?
  • Are Indigenous communities, unions, and frontline workers meaningfully consulted and included in governance decisions?
When these measures are in place and publicly reported, a guideline matures into a defensible policy regime; without them, the territory remains exposed to preventable privacy, legal and reputational incidents.

Conclusion: a pragmatic path from guidance to governance​

The GNWT’s decision to rely on a high‑level guideline and existing policies instead of producing a standalone AI policy has defensible motives: speed of adoption, leveraging vendor enterprise protections, and avoiding regulatory duplication. Yet high‑level guidance without operational policy risks implementation drift — where different departments adopt different practices, sensitive data slips through connectors, and AI outputs enter the public record without provenance or verification.
A responsible middle path is available. Keep the GNWT’s enterprise Copilot and AI Hub to reduce shadow AI and accelerate literacy, but simultaneously adopt the operational practices outlined above: formal DPIAs, risk‑tiered approvals, procurement safeguards, human‑in‑the‑loop rules, and explicit Indigenous engagement. Those steps convert a plausible guideline into an accountable, auditable public‑sector AI governance program capable of delivering productivity gains while protecting privacy, equity and public trust.
Source: cabinradio.ca NWT government has no plan to develop AI policy
 

The Northwest Territories government says it has no plan to develop a standalone artificial intelligence policy for its public service, relying instead on a high-level generative AI guideline issued in May 2025 and existing information-management rules — a posture that expert observers, union representatives and privacy officials warn is incomplete given the rapidly expanding use of AI in frontline services and decision-making.

Background​

Generative artificial intelligence — tools that produce text, images, audio and code from prompts — has moved from novelty to everyday utility in both private and public sectors. Governments across Canada and internationally have responded with a variety of instruments: national-level guidance, sector-specific rules, procurement controls, and, in some jurisdictions, mandatory impact assessments for automated decision systems. At the federal level, Canadian authorities have published guidance and a framework of controls addressing automated decision-making and the responsible use of generative AI; those federal instruments set baseline expectations that provinces and territories can adopt or augment.
The Government of the Northwest Territories (GNWT) released a concise, high-level guideline on generative AI in May 2025 that encourages responsible use, training and safeguards. GNWT officials report that employees have access to an internal “AI Hub” with training, and that Microsoft Copilot is the sanctioned enterprise tool for many staff — deployed under enterprise protections the government describes as secure. The GNWT has not, however, moved to create a standalone, detailed AI policy that specifies governance, vetting, operational approvals, or mandatory impact assessments for new AI tools.

What the GNWT guideline says — and what it leaves unsaid​

The GNWT guideline is intentionally brief and advisory in tone. It recommends that departments:
  • establish clear rules and responsibilities for using generative AI;
  • put safeguards in place to protect data and manage risks;
  • provide information on why, how and when generative AI is used;
  • monitor generative AI systems and outcomes.
The territorial finance department also reports that GNWT employees are offered general generative AI training and tool-specific training for Microsoft Copilot. Officials say the Copilot instance available to GNWT employees is configured within Microsoft’s enterprise environment so that data remains within the government’s tenancy and is subject to its information-management controls. The GNWT also indicates it has relied on privacy impact assessments and legal reviews performed by other jurisdictions as part of its vetting process rather than conducting its own, jurisdiction-specific privacy impact assessment.
What the guideline does not do, however, is:
  • require a mandatory, documented assessment for each use case (for example, an Algorithmic Impact Assessment or equivalent);
  • define a formal governance body with delegated authority to approve or reject AI tools or projects;
  • publish an inventory of approved, piloted, or prohibited tools;
  • set minimum contractual requirements for vendors (data residency, training exclusions, logging/auditability);
  • lay out reporting obligations or redress mechanisms when an AI system harms individuals or communities.
These omissions matter: high-level guidance can signal intent, but policy is what creates enforceable guardrails in procurement, operations and accountability.

Voices inside and outside the GNWT: confidence, caution and critique​

Officials, including the territorial minister of finance, describe GNWT cybersecurity and information management practices as robust and emphasize ongoing training. They stress that employees are expected to review AI outputs and that Copilot is provisioned in an enterprise configuration designed to protect information.
Still, multiple stakeholders interviewed or quoted by nearby reporting raise concerns:
  • A GNWT employee speaking anonymously described the guideline as “just suggestions” and said existing security and records-management policies were not written with AI in mind, raising questions about oversight and accountability.
  • The NWT Information and Privacy Commissioner welcomed the guideline as a sign the territory is paying attention, but described it as a statement of intent rather than an operational policy, and said more tailored rules are needed for particular AI applications.
  • Academic experts in information law called the guideline “unfocused and unclear,” noting it conflates departmental deployments with ad-hoc individual use and does not define decision-making authority or vetting procedures.
  • The Union of Northern Workers expressed concerns about AI replacing culturally sensitive human-delivered services, potential impacts on bargaining-unit work, and how errors from AI could affect employee performance reviews.
  • Legal regulators and courts in the territory have moved ahead of the GNWT in some respects: the Law Society of the Northwest Territories published practice guidelines for lawyers’ use of generative AI in early 2025, and the territory’s Supreme Court issued a notice urging caution about AI-generated court submissions and emphasizing verification requirements for documents that include AI-generated content.
These voices point to a common theme: a high-level guideline is a helpful first step, but not a substitute for enforceable policy and governance.

Real-world failures and near-misses underline the stakes​

The risks of insufficient governance are not hypothetical. Several incidents over recent years illustrate how AI errors can produce real-world consequences in public-sector contexts:
  • Reports in 2025 and 2026 identified government-commissioned consulting products and public documents that contained fabricated citations and other outputs likely produced by generative AI without sufficient human verification. Those errors triggered public criticism, demand for reviews, and political fallout.
  • Public agencies and first-responder organizations have publicly corrected or castigated AI-generated media used in official or semi-official channels when the content was inaccurate, sensationalized, or misleading.
  • In healthcare, jurisdictions piloting AI note-taking and scribing tools have had to implement explicit patient-consent processes, careful data governance and clinical verification steps to protect privacy and clinical safety.
These examples demonstrate three persistent failure modes when AI governance is weak:
  1. Hallucination and misinformation: AI models can create plausible but false statements — citations, facts, or legal arguments — that, if unverified, can contaminate decision-making or public communications.
  2. Data leakage and privacy breaches: Without controls, staff might input sensitive or culturally protected data into public AI services, risking exposure and misuse.
  3. Operational and reputational harms: When AI-generated products are used in core services (healthcare notes, court transcripts, public reports) errors can harm individuals, undermine trust, or impose costs to remediate.

Why a standalone AI policy matters for a small public service​

The GNWT’s operating context amplifies the need for clear rules:
  • The territorial public service is relatively small and many employees serve remote and Indigenous communities where cultural sensitivity and privacy are especially important.
  • Technical expertise and dedicated AI governance capacity are limited; without clear policy, the burden of safe deployment falls informally to individual managers or staff.
  • AI adoption is accelerating in specific domains — healthcare scribing pilots, court-transcript automation, and internal productivity tools — each of which carries distinct risks and benefit profiles.
A comprehensive AI policy tailored to the territory would do more than restrict tools: it would provide clarity about who makes decisions, how risks are assessed, and how communities are consulted, especially Indigenous governments and organizations.

What best-practice AI governance looks like — practical elements GNWT should consider​

A robust AI policy suitable for a public service of the GNWT’s size and mandate should include the following elements. These are actionable, auditable and aligned with contemporary public-sector practice.

Governance and accountability​

  • Designate an AI lead and governance committee with cross-departmental representation (IT, privacy, legal, policy, program owners, Indigenous engagement). That body should have authority to approve pilots, require impact assessments, and escalate non-compliance.
  • Define roles and responsibilities clearly: who vets vendors, who approves access to tools, who is responsible for incident response, and who maintains the approved tool inventory.

Risk assessment and approvals​

  • Mandatory impact assessments for every AI deployment. These should mirror the principles behind algorithmic impact assessments and privacy impact assessments: identify data flows, actors, downstream impacts, and mitigation measures.
  • Categorize AI use cases by risk level (low, medium, high) and require progressively stronger controls for higher-risk applications (e.g., direct client-facing decisions, clinical documentation, legal filings).

Data governance and vendor controls​

  • Adopt data classification tied to AI use: explicitly forbid entering personal health information, protected Indigenous knowledge, or classified materials into unvetted external AI services.
  • Procurement requirements that demand vendor assurances about data residency, non-use of customer data for model training unless consented, logging, auditability, and the right to forensic review.
  • Contractual clauses that preserve government ownership of data and outputs, require explainability support, and permit third-party audits.

Human oversight and operational controls​

  • Human-in-the-loop rules that make final responsibility clear: AI may assist, but humans validate and are accountable for outputs used in decisions or public materials.
  • Clear direction on record-keeping: outputs generated or modified by AI must be recorded, retained, and discoverable under records-retention rules.
  • Training and change management: mandatory, role-based training for staff including model limitations, data-handling rules, and red-flag scenarios.

Transparency, community engagement and cultural safeguards

  • Public transparency where appropriate: publish where and why AI is deployed, especially for public-facing services or decision-making systems.
  • Engagement with Indigenous communities and meaningful consultation on culturally sensitive data or service delivery — including recognition of Indigenous data sovereignty principles.
  • Clear consent pathways for services involving personal or health data, with opt-out options and plain-language explanations.

Monitoring, auditing and remediation​

  • Continuous monitoring for bias, accuracy drift, and privacy incidents; logging must enable retrospective audits (a minimal drift-check sketch follows this list).
  • Incident response and remediation plans that address harms to individuals (e.g., incorrect medical notes, erroneous court transcripts), including correction processes and reporting obligations.
  • Environmental considerations: monitor and mitigate carbon footprint where large-scale model usage is significant; include AI activity in greenhouse gas reporting where appropriate.
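To show what monitoring for accuracy drift could look like in practice, here is a minimal, hypothetical sketch that tracks how often human reviewers have to correct AI outputs and raises a flag when a recent window degrades past an accepted baseline. The baseline, margin and window size are illustrative assumptions, not measured GNWT figures.

```python
from collections import deque

# Illustrative drift monitor: track the share of AI outputs that human
# reviewers had to correct, and flag when the recent rate rises noticeably
# above the rate accepted at go-live. All thresholds are assumptions.

BASELINE_CORRECTION_RATE = 0.05   # 5% of outputs needed correction at go-live
ALERT_MARGIN = 0.05               # alert if the recent rate rises 5 points above baseline
WINDOW = 200                      # number of recent reviews to consider

recent_reviews: deque[bool] = deque(maxlen=WINDOW)  # True = output needed correction

def record_review(needed_correction: bool) -> None:
    recent_reviews.append(needed_correction)

def drift_alert() -> bool:
    if len(recent_reviews) < WINDOW:
        return False  # not enough data yet to judge drift
    rate = sum(recent_reviews) / len(recent_reviews)
    return rate > BASELINE_CORRECTION_RATE + ALERT_MARGIN

if __name__ == "__main__":
    # Simulate a window in which 20% of outputs needed correction.
    for i in range(WINDOW):
        record_review(needed_correction=(i % 5 == 0))
    print("Accuracy drift alert:", drift_alert())  # True
```

The hard part is not the arithmetic but the discipline of recording reviewer corrections consistently enough for a signal like this to mean anything.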

A practical, phased roadmap GNWT can adopt immediately​

The GNWT does not need to wait to build a full policy. A pragmatic, phased rollout will manage risk while enabling beneficial pilots.
  1. Appoint an AI lead and convene a cross-departmental governance committee.
  2. Create an AI inventory and risk-categorize current pilots and tools within 60 days (a minimal registry sketch follows this roadmap).
  3. Require an impact assessment (privacy + algorithmic) for every medium- or high-risk project before further rollout.
  4. Implement an “approved tool list” policy: only tools vetted and contractually secured may be used for sensitive work.
  5. Deploy sandboxes for pilots with logging and defined evaluation criteria (accuracy, equity, privacy, environmental cost).
  6. Publish a public statement of principles and a timeline for a formal AI policy and community consultation, including Indigenous partners.
  7. Mandate role-based training for all staff with access to AI tools and enforce data-handling rules.
  8. Establish regular audit schedules and public reporting on AI deployments, incidents and corrective actions.
This sequence balances speed and safety, enabling the government to learn from pilots while establishing enforceable guardrails.
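Steps 2 and 4 can be made concrete with very little machinery. The sketch below is a hypothetical illustration of a risk-categorized inventory that doubles as the approved tool list; the tool names, fields and statuses are invented for illustration and do not describe actual GNWT holdings.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str
    owner_department: str
    risk_tier: str                     # "low" | "medium" | "high"
    impact_assessment_done: bool = False
    approved_for_sensitive_work: bool = False
    notes: str = ""

@dataclass
class AIInventory:
    records: list[AIToolRecord] = field(default_factory=list)

    def register(self, record: AIToolRecord) -> None:
        self.records.append(record)

    def approved_tools(self) -> list[str]:
        """The 'approved tool list': vetted tools cleared for sensitive work."""
        return [r.name for r in self.records
                if r.impact_assessment_done and r.approved_for_sensitive_work]

    def needing_assessment(self) -> list[str]:
        """Medium- and high-risk entries still waiting on an impact assessment."""
        return [r.name for r in self.records
                if r.risk_tier in {"medium", "high"} and not r.impact_assessment_done]

if __name__ == "__main__":
    inventory = AIInventory()
    inventory.register(AIToolRecord("Enterprise Copilot", "Finance", "medium",
                                    impact_assessment_done=True,
                                    approved_for_sensitive_work=True))
    inventory.register(AIToolRecord("Clinical scribing pilot", "Health", "high"))
    print(inventory.approved_tools())        # ['Enterprise Copilot']
    print(inventory.needing_assessment())    # ['Clinical scribing pilot']
```

Whether this lives in a spreadsheet, a database or a published register matters less than having a single authoritative list that procurement, audit and staff all reference.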

Microsoft Copilot and the GNWT: what “secure instance” means — and what it doesn’t​

Microsoft Copilot (the enterprise Copilot for Microsoft 365) is widely marketed to public-sector customers as an integrated, tenant-bound assistant that respects organizational access controls and enterprise data protections. When properly configured at the enterprise level, Copilot’s processing is bounded by the tenant environment, and vendors typically contractually state that customer data will not be used to train public, shared foundation models without consent.
But those protections are not automatic or limitless:
  • Enterprise deployments inherit the permission model of Microsoft 365: if a document is broadly shared, Copilot can surface it, so gaps in sharing and permission hygiene become readily discoverable through Copilot.
  • Organizations must still set retention, audit and Purview policies, and train staff on what not to paste into prompts (e.g., unredacted personal health information).
  • Contract terms and administrative configuration determine whether interaction logs, prompts or outputs are retained and how they are accessible for audits.
A “secure Copilot” can be highly protective — but only with active governance, careful configuration, auditing and staff training.

Frontline guidance: what GNWT employees and unions should expect now​

In the absence of a binding policy, the GNWT should at minimum require the following as standard operating practice:
  • Never input sensitive personal, health, Indigenous cultural or classified information into public-facing, unvetted AI services (a simple screening sketch follows this list).
  • Always verify AI outputs before using them in clinical notes, legal documents, official reports, or public posts.
  • Document when AI was used to generate or edit a deliverable, and retain the prompts and outputs in accordance with records policies.
  • Obtain explicit, informed consent from service users when AI tools are used in care settings (e.g., AI scribing for clinical visits).
  • Ensure that AI use does not substitute for bargaining-unit work or be used to unfairly evaluate employees.
Unions and employee associations should press for clear protections: training, job-safety assurances, transparent criteria for AI-assisted performance review, and explicit limits against AI replacing unionized roles without negotiation.
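The first rule above can be partially backed by tooling. The sketch below is a hypothetical, deliberately simple pre-prompt screen that flags obviously sensitive content before it is sent to any unvetted AI service; the patterns and keywords are illustrative assumptions, and a real deployment would rely on proper DLP tooling and tuned rules rather than a short regex list.

```python
import re

# Illustrative patterns only; real DLP rules would be broader and carefully tuned.
SENSITIVE_PATTERNS = {
    "long numeric identifier": re.compile(r"\b\d{8,10}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "possible SIN": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),
}

# Hypothetical keyword list; chosen only to illustrate the approach.
BLOCKED_KEYWORDS = {"diagnosis", "patient", "accused", "band council resolution"}

def screen_prompt(prompt: str) -> list[str]:
    """Return a list of reasons the prompt should be held for review."""
    reasons = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            reasons.append(f"matches pattern: {label}")
    lowered = prompt.lower()
    for word in BLOCKED_KEYWORDS:
        if word in lowered:
            reasons.append(f"contains flagged keyword: {word}")
    return reasons

if __name__ == "__main__":
    findings = screen_prompt("Summarise the patient file for Jane Doe, SIN 123-456-789.")
    if findings:
        print("Do not send to an unvetted AI service:", findings)
```

A screen like this catches only the obvious cases; it is a backstop for training and policy, not a substitute for either.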

Legal and cultural obligations: beyond technical controls​

AI governance is not purely technical. For the GNWT this means:
  • honoring professional duties in regulated sectors (lawyers, health professionals, justice workers) that already require verification and client consent;
  • respecting Indigenous data sovereignty by consulting and building protections for culturally sensitive knowledge and community data;
  • ensuring administrative fairness and individual rights by applying human oversight to decisions affecting benefits, services or legal rights.
These obligations point to policy elements that are fundamentally legal and ethical, not optional technicalities.

Conclusion — a narrow window to get governance right​

The GNWT’s current approach — a high-level guideline plus reliance on existing information-management policies — is a start, but risks leaving the territory exposed to avoidable harms as generative AI moves into core services. The technology’s capacity to accelerate work and reduce administrative burden is real and valuable; yet so are the risks of misinformation, privacy breaches, biased outcomes, and cultural harm when tools are adopted without rigorous vetting, governance and community engagement.
A pragmatic, staged AI policy that sets enforceable requirements for impact assessments, vendor contracts, human oversight, Indigenous consultation and public transparency would let the territory reap benefits while protecting residents and employees. The alternative is an ad-hoc patchwork that invites operational mistakes, legal headaches and erosion of public trust — costs that can far exceed the short-term efficiencies AI promises.
The GNWT can preserve the promise of generative AI by turning its guideline into policy: appoint accountable leadership, require documented risk assessments, publish an approved tools list, and commit to meaningful Indigenous and public engagement. Doing so will align the territory with contemporary public-sector practice and ensure generative AI is used responsibly, transparently and in service of the people the GNWT serves.

Source: cabinradio.ca NWT government has no plan to develop AI policy
 

The Northwest Territories government says it has no plan to develop a standalone artificial intelligence policy for its public service, relying instead on a high-level generative AI guideline issued in May 2025 and existing information-management rules — a posture that expert observers, union representatives and privacy officials warn is incomplete given the rapidly expanding use of AI in frontline services and decision-making.

Background​

Generative artificial intelligence — tools that produce text, images, audio and code from prompts — has moved from novelty to everyday utility in both private and public sectors. Governments across Canada and internationally have responded with a variety of instruments: national-level guidance, sector-specific rules, procurement controls, and, in some jurisdictions, mandatory impact assessments for automated decision systems. At the federal level, Canadian authorities have published guidance and a framework of controls addressing automated decision-making and the responsible use of generative AI; those federal instruments set baseline expectations that provinces and territories can adopt or augment.
The Government of the Northwest Territories (GNWT) released a concise, high-level guideline on generative AI in May 2025 that encourages responsible use, training and safeguards. GNWT officials report that employees have access to an internal “AI Hub” with training, and that Microsoft Copilot is the sanctioned enterprise tool for many staff — deployed under enterprise protections the government describes as secure. The GNWT has not, however, moved to create a standalone, detailed AI policy that specifies governance, vetting, operational approvals, or mandatory impact assessments for new AI tools.

What the GNWT guideline says — and what it leaves unsaid​

The GNWT guideline is intentionally brief and advisory in tone. It recommends that departments:
  • establish clear rules and responsibilities for using generative AI;
  • put safeguards in place to protect data and manage risks;
  • provide information on why, how and when generative AI is used;
  • monitor generative AI systems and outcomes.
The territorial finance department also reports that GNWT employees are offered general generative AI training and tool-specific training for Microsoft Copilot. Officials say the Copilot instance available to GNWT employees is configured within Microsoft’s enterprise environment so that data remains within the government’s tenancy and is subject to its information-management controls. The GNWT also indicates it has relied on privacy impact assessments and legal reviews performed by other jurisdictions as part of its vetting process rather than conducting its own, jurisdiction-specific privacy impact assessment.
What the guideline does not do, however, is:
  • require a mandatory, documented assessment for each use case (for example, an Algorithmic Impact Assessment or equivalent);
  • define a formal governance body with delegated authority to approve or reject AI tools or projects;
  • publish an inventory of approved, piloted, or prohibited tools;
  • set minimum contractual requirements for vendors (data residency, training exclusions, logging/auditability);
  • lay out reporting obligations or redress mechanisms when an AI system harms individuals or communities.
These omissions matter: high-level guidance can signal intent, but policy is what creates enforceable guardrails in procurement, operations and accountability.

Voices inside and outside the GNWT: confidence, caution and critique​

Officials, including the territorial minister of finance, describe GNWT cybersecurity and information management practices as robust and emphasize ongoing training. They stress that employees are expected to review AI outputs and that Copilot is provisioned in an enterprise configuration designed to protect information.
Still, multiple stakeholders interviewed or quoted in recent reporting raise concerns:
  • A GNWT employee speaking anonymously described the guideline as “just suggestions” and said existing security and records-management policies were not written with AI in mind, raising questions about oversight and accountability.
  • The NWT Information and Privacy Commissioner welcomed the guideline as a sign the territory is paying attention, but described it as a statement of intent rather than an operational policy, and said more tailored rules are needed for particular AI applications.
  • Academic experts in information law called the guideline “unfocused and unclear,” noting it conflates departmental deployments with ad-hoc individual use and does not define decision-making authority or vetting procedures.
  • The Union of Northern Workers expressed concerns about AI replacing culturally sensitive human-delivered services, potential impacts on bargaining-unit work, and how errors from AI could affect employee performance reviews.
  • Legal regulators and courts in the territory have moved ahead of the GNWT in some respects: the Law Society of the Northwest Territories published practice guidelines for lawyers’ use of generative AI in early 2025, and the territory’s Supreme Court issued a notice urging caution about AI-generated court submissions and emphasizing verification requirements for documents that include AI-generated content.
These voices point to a common theme: a high-level guideline is a helpful first step, but not a substitute for enforceable policy and governance.

Real-world failures and near-misses underline the stakes​

The risks of insufficient governance are not hypothetical. Several incidents over recent years illustrate how AI errors can produce real-world consequences in public-sector contexts:
  • Reports in 2025 and 2026 identified government-commissioned consulting products and public documents that contained fabricated citations and other outputs likely produced by generative AI without sufficient human verification. Those errors triggered public criticism, demand for reviews, and political fallout.
  • Public agencies and first-responder organizations have publicly corrected or criticized AI-generated media used in official or semi-official channels when the content was inaccurate, sensationalized, or misleading.
  • In healthcare, jurisdictions piloting AI note-taking and scribing tools have had to implement explicit patient-consent processes, careful data governance and clinical verification steps to protect privacy and clinical safety.
These examples demonstrate three persistent failure modes when AI governance is weak:
  1. Hallucination and misinformation: AI models can create plausible but false statements — citations, facts, or legal arguments — that, if unverified, can contaminate decision-making or public communications (a small verification sketch follows this list).
  2. Data leakage and privacy breaches: Without controls, staff might input sensitive or culturally protected data into public AI services, risking exposure and misuse.
  3. Operational and reputational harms: When AI-generated products are used in core services (healthcare notes, court transcripts, public reports), errors can harm individuals, undermine trust, or impose remediation costs.
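Some of that verification can be partly automated. As a narrow illustration, the sketch below checks whether URLs cited in a draft actually resolve; it catches dead links, not fabricated references to plausible-looking sources, and the draft text and URLs in the example are illustrative only.
```python
"""Narrow illustration: confirm that URLs cited in a draft resolve.

Catches dead links only; it cannot detect fabricated citations to real-looking sources.
"""
import re
import requests

URL_PATTERN = re.compile(r"https?://[^\s)\]]+")

def unverifiable_links(draft_text: str) -> list[str]:
    """Return cited URLs that fail to resolve with an HTTP HEAD request."""
    bad = []
    for url in sorted(set(URL_PATTERN.findall(draft_text))):
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                bad.append(url)
        except requests.RequestException:
            bad.append(url)
    return bad

draft = "See the federal guidance at https://www.canada.ca/en.html and https://example.invalid/report"
print(unverifiable_links(draft))
```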

Why a standalone AI policy matters for a small public service​

The GNWT’s operating context amplifies the need for clear rules:
  • The territorial public service is relatively small and many employees serve remote and Indigenous communities where cultural sensitivity and privacy are especially important.
  • Technical expertise and dedicated AI governance capacity are limited; without clear policy, the burden of safe deployment falls informally to individual managers or staff.
  • AI adoption is accelerating in specific domains — healthcare scribing pilots, court-transcript automation, and internal productivity tools — each of which carries distinct risks and benefit profiles.
A comprehensive AI policy tailored to the territory would do more than restrict tools: it would provide clarity about who makes decisions, how risks are assessed, and how communities are consulted, especially Indigenous governments and organizations.

What best-practice AI governance looks like — practical elements GNWT should consider​

A robust AI policy suitable for a public service of the GNWT’s size and mandate should include the following elements. These are actionable, auditable and aligned with contemporary public-sector practice.

Governance and accountability​

  • Designate an AI lead and governance committee with cross-departmental representation (IT, privacy, legal, policy, program owners, Indigenous engagement). That body should have authority to approve pilots, require impact assessments, and escalate non-compliance.
  • Define roles and responsibilities clearly: who vets vendors, who approves access to tools, who is responsible for incident response, and who maintains the approved tool inventory.

Risk assessment and approvals​

  • Mandatory impact assessments for every AI deployment. These should mirror the principles behind algorithmic impact assessments and privacy impact assessments: identify data flows, actors, downstream impacts, and mitigation measures.
  • Categorize AI use cases by risk level (low, medium, high) and require progressively stronger controls for higher-risk applications (e.g., direct client-facing decisions, clinical documentation, legal filings).
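To make the tiering concrete, each registered use case can carry its risk level and the minimum controls that level triggers, so gaps are visible at a glance. The sketch below is an illustrative schema only; the tier definitions, control lists and example pilot are assumptions rather than anything the GNWT has adopted.
```python
"""Illustrative risk-tier registry entry; tier names and controls are assumed, not official."""
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., internal drafting aids
    MEDIUM = "medium"  # e.g., summaries that feed internal decisions
    HIGH = "high"      # e.g., clinical notes, legal filings, client-facing decisions

# Minimum controls each tier triggers (illustrative, not a GNWT standard).
REQUIRED_CONTROLS = {
    RiskTier.LOW: ["role-based training", "human verification of outputs"],
    RiskTier.MEDIUM: ["privacy impact assessment", "approved tool only", "human sign-off"],
    RiskTier.HIGH: ["algorithmic impact assessment", "governance committee approval",
                    "audit logging", "named accountable owner"],
}

@dataclass
class AIUseCase:
    name: str
    department: str
    tier: RiskTier
    controls_in_place: list[str] = field(default_factory=list)

    def missing_controls(self) -> list[str]:
        """Controls the tier requires that have not yet been documented for this use case."""
        return [c for c in REQUIRED_CONTROLS[self.tier] if c not in self.controls_in_place]

pilot = AIUseCase("clinical visit scribing", "Health and Social Services",
                  RiskTier.HIGH, controls_in_place=["audit logging"])
print(pilot.missing_controls())
```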

Data governance and vendor controls​

  • Adopt data classification tied to AI use: explicitly forbid entering personal health information, protected Indigenous knowledge, or classified materials into unvetted external AI services (a minimal screening sketch follows this list).
  • Procurement requirements that demand vendor assurances about data residency, non-use of customer data for model training without consent, logging, auditability, and the right to forensic review.
  • Contractual clauses that preserve government ownership of data and outputs, require explainability support, and permit third-party audits.
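One way the prohibition in the first bullet could be enforced in internally built integrations is a screening gate that checks prompts against data-handling rules before anything leaves the tenant. The sketch below is deliberately crude: the patterns are placeholders, and a real deployment would rely on classification labels and DLP tooling rather than ad-hoc regular expressions.
```python
"""Crude illustrative screening gate; real deployments would use DLP and classification labels."""
import re

BLOCKED_PATTERNS = {
    "possible health card number": re.compile(r"\b[A-Z]?\d{7,9}\b"),
    "possible SIN": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),
    "sensitivity keyword": re.compile(r"\b(confidential|protected b|patient)\b", re.I),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons a prompt should not be sent to an external AI service."""
    return [reason for reason, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

def send_to_external_service(prompt: str) -> str:
    reasons = screen_prompt(prompt)
    if reasons:
        raise PermissionError(f"Prompt blocked by data-handling rules: {', '.join(reasons)}")
    # ... call the vetted, contractually secured service here ...
    return "ok"

print(screen_prompt("Summarise the patient file for health card 1234567"))
```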

Human oversight and operational controls​

  • Human-in-the-loop rules that make final responsibility clear: AI may assist, but humans validate and are accountable for outputs used in decisions or public materials.
  • Clear direction on record-keeping: outputs generated or modified by AI must be recorded, retained, and discoverable under records-retention rules (a minimal provenance-record sketch follows this list).
  • Training and change management: mandatory, role-based training for staff including model limitations, data-handling rules, and red-flag scenarios.
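The record-keeping direction implies a simple provenance record for every AI-assisted deliverable: who prompted, with which tool, what came back, and who verified it before use. A minimal sketch of that record is shown below; the field names, log location and example values are illustrative assumptions rather than a GNWT standard.
```python
"""Minimal provenance record for AI-assisted deliverables; fields and paths are illustrative."""
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_use_log.jsonl")  # placeholder; real retention would follow records policy

@dataclass
class AIUseRecord:
    author: str          # employee accountable for the deliverable
    tool: str            # e.g., "Microsoft 365 Copilot"
    purpose: str         # what the AI assisted with
    prompt: str          # retained in line with records policy
    output_summary: str  # or a pointer to the stored output
    verified_by: str     # person who checked the output before it was used
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_ai_use(record: AIUseRecord, path: Path = LOG_PATH) -> None:
    """Append one provenance record to an append-only log that forms part of the record set."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_ai_use(AIUseRecord(
    author="j.doe", tool="Microsoft 365 Copilot",
    purpose="draft briefing note", prompt="Summarise the attached consultation notes",
    output_summary="stored at records ref 2025-xxx", verified_by="a.smith",
))
```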

Transparency, community engagement and cultural safeguards

  • Public transparency where appropriate: publish where and why AI is deployed, especially for public-facing services or decision-making systems.
  • Engagement with Indigenous communities and meaningful consultation on culturally sensitive data or service delivery — including recognition of Indigenous data sovereignty principles.
  • Clear consent pathways for services involving personal or health data, with opt-out options and plain-language explanations.

Monitoring, auditing and remediation​

  • Continuous monitoring for bias, accuracy drift, and privacy incidents; logging must enable retrospective audits (see the drift-check sketch after this list).
  • Incident response and remediation plans that address harms to individuals (e.g., incorrect medical notes, erroneous court transcripts), including correction processes and reporting obligations.
  • Environmental considerations: monitor and mitigate carbon footprint where large-scale model usage is significant; include AI activity in greenhouse gas reporting where appropriate.
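Monitoring for accuracy drift, flagged in the first bullet above, does not require elaborate tooling to begin: re-score a fixed, human-labelled evaluation set on a schedule and compare the result against the baseline accepted at approval. The sketch below illustrates that comparison; the baseline, threshold and weekly scores are placeholder values for whatever the governance body sets.
```python
"""Illustrative drift check; baseline, threshold and scores are placeholders."""
from statistics import mean

BASELINE_ACCURACY = 0.92   # accuracy accepted when the deployment was approved
ALERT_THRESHOLD = 0.05     # drop below baseline that triggers a review

def check_for_drift(recent_scores: list[float]) -> bool:
    """Return True when recent accuracy has fallen far enough below baseline to warrant review."""
    recent = mean(recent_scores)
    drifted = (BASELINE_ACCURACY - recent) > ALERT_THRESHOLD
    if drifted:
        print(f"Accuracy drift detected: baseline {BASELINE_ACCURACY:.2f}, recent {recent:.2f}")
    return drifted

# Example: weekly scores from re-running a fixed, human-labelled evaluation set.
check_for_drift([0.90, 0.86, 0.84, 0.83])
```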

A practical, phased roadmap GNWT can adopt immediately​

The GNWT does not need to wait to build a full policy. A pragmatic, phased rollout will manage risk while enabling beneficial pilots.
  1. Appoint an AI lead and convene a cross-departmental governance committee.
  2. Create an AI inventory and risk-categorize current pilots and tools within 60 days.
  3. Require an impact assessment (privacy + algorithmic) for every medium- or high-risk project before further rollout.
  4. Implement an “approved tool list” policy: only tools vetted and contractually secured may be used for sensitive work.
  5. Deploy sandboxes for pilots with logging and defined evaluation criteria (accuracy, equity, privacy, environmental cost).
  6. Publish a public statement of principles and a timeline for a formal AI policy and community consultation, including Indigenous partners.
  7. Mandate role-based training for all staff with access to AI tools and enforce data-handling rules.
  8. Establish regular audit schedules and public reporting on AI deployments, incidents and corrective actions.
This sequence balances speed and safety, enabling the government to learn from pilots while establishing enforceable guardrails.

Microsoft Copilot and the GNWT: what “secure instance” means — and what it doesn’t​

Microsoft Copilot (the enterprise Copilot for Microsoft 365) is widely marketed to public-sector customers as an integrated, tenant-bound assistant that respects organizational access controls and enterprise data protections. When properly configured at the enterprise level, Copilot’s processing is bounded by the tenant environment, and vendors typically contractually state that customer data will not be used to train public, shared foundation models without consent.
But those protections are not automatic or limitless:
  • Enterprise deployments inherit the permission model of Microsoft 365: if a document is broadly shared, Copilot can surface it. Security gaps in sharing or permissions will be visible to Copilot in predictable ways.
  • Organizations must still set retention, audit and Purview policies, and train staff on what not to paste into prompts (e.g., unredacted personal health information).
  • Contract terms and administrative configuration determine whether interaction logs, prompts or outputs are retained and how they are accessible for audits.
A “secure Copilot” can be highly protective — but only with active governance, careful configuration, auditing and staff training.

Frontline guidance: what GNWT employees and unions should expect now​

In the absence of a binding policy, the GNWT should at minimum require the following as standard operating practice:
  • Never input sensitive personal, health, Indigenous cultural or classified information into public-facing, unvetted AI services.
  • Always verify AI outputs before using them in clinical notes, legal documents, official reports, or public posts.
  • Document when AI was used to generate or edit a deliverable, and retain the prompts and outputs in accordance with records policies.
  • Obtain explicit, informed consent from service users when AI tools are used in care settings (e.g., AI scribing for clinical visits).
  • Ensure that AI use does not substitute for bargaining-unit work and is not used to unfairly evaluate employees.
Unions and employee associations should press for clear protections: training, job-safety assurances, transparent criteria for AI-assisted performance review, and explicit limits against AI replacing unionized roles without negotiation.

Legal and cultural obligations: beyond technical controls​

AI governance is not purely technical. For the GNWT this means:
  • honoring professional duties in regulated sectors (lawyers, health professionals, justice workers) that already require verification and client consent;
  • respecting Indigenous data sovereignty by consulting and building protections for culturally sensitive knowledge and community data;
  • ensuring administrative fairness and individual rights by applying human oversight to decisions affecting benefits, services or legal rights.
These obligations point to policy elements that are fundamentally legal and ethical, not optional technicalities.

Conclusion — a narrow window to get governance right​

The GNWT’s current approach — a high-level guideline plus reliance on existing information-management policies — is a start, but risks leaving the territory exposed to avoidable harms as generative AI moves into core services. The technology’s capacity to accelerate work and reduce administrative burden is real and valuable; yet so are the risks of misinformation, privacy breaches, biased outcomes, and cultural harm when tools are adopted without rigorous vetting, governance and community engagement.
A pragmatic, staged AI policy that sets enforceable requirements for impact assessments, vendor contracts, human oversight, Indigenous consultation and public transparency would let the territory reap benefits while protecting residents and employees. The alternative is an ad-hoc patchwork that invites operational mistakes, legal headaches and erosion of public trust — costs that can far exceed the short-term efficiencies AI promises.
The GNWT can preserve the promise of generative AI by turning its guideline into policy: appoint accountable leadership, require documented risk assessments, publish an approved tools list, and commit to meaningful Indigenous and public engagement. Doing so will align the territory with contemporary public-sector practice and ensure generative AI is used responsibly, transparently and in service of the people the GNWT serves.

Source: cabinradio.ca NWT government has no plan to develop AI policy
 
