The Regional District of Okanagan‑Similkameen (RDOS) has moved from pilot to policy, proposing a tightly scoped AI use framework that would permit only a tenant‑bound Microsoft Copilot for staff, limit use to low‑risk drafting and summarization tasks, and require disclosure and managerial review of AI‑assisted outputs. The proposal touts a headline annual saving of roughly $91,000 — a figure that public reporting so far does not fully substantiate.
Background
Municipalities and regional districts across Canada have taken a cautious, pilot‑first approach to generative AI: short experiments, evaluation of measured benefits, then narrow policies before any wider rollout. RDOS followed that pattern with a summer Copilot pilot that, according to staff summaries published in local reporting, involved a cohort of roughly 16–20 staff and produced an aggregate time saving described as the equivalent of 15 work‑days in one month. The pilot is now the basis for a draft “appropriate use” policy going before the RDOS board that would restrict staff to an enterprise Copilot deployment, with explicit bans on sending personal, confidential or restricted data to AI tools and manager‑level gatekeeping for licence issuance.

This move mirrors actions taken by similarly sized local governments that favour an “enterprise‑first” posture: they prefer a tenancy‑bound AI assistant that integrates with existing Microsoft 365 controls (identity, Purview DLP, conditional access) rather than allowing unrestricted use of public consumer chatbots. That posture reduces some attack surface but shifts the governance burden to correct tenant configuration, procurement safeguards, records management, and staff training.
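The reported pilot saving can be sanity‑checked in a few lines. The 7.5‑hour workday and 21 working days per month below are assumptions, since the staff report's inputs have not been published; only the 15 work‑days figure and the 16–20 cohort range come from the reporting.

```python
# Sanity check on the reported pilot saving of 15 aggregate work-days in one month.
# Assumed (not from the staff report): 7.5-hour workday, 21 working days per month.
HOURS_PER_DAY = 7.5
WORKING_DAYS_PER_MONTH = 21

saved_hours = 15 * HOURS_PER_DAY  # aggregate hours saved in the pilot month

for cohort in (16, 20):  # reported cohort range
    per_person_minutes = saved_hours / cohort / WORKING_DAYS_PER_MONTH * 60
    print(f"{cohort} users -> ~{per_person_minutes:.0f} min saved per person per day")
```

At these assumptions the claim works out to roughly 16–20 minutes saved per person per day, a plausible range for routine drafting; that is why the figure is credible in direction even though it is not yet auditable.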
What RDOS proposes — the policy in plain language
- Whitelist approach: only Microsoft Copilot would be sanctioned for official business. Licences would be approved by departmental managers and overseen by Information Services.
- Narrow permitted uses: drafting internal emails, routine reports and communications, summarizing non‑confidential public information, and editorial/formatting assistance. High‑risk use cases — enforcement, legal determinations, or decisions affecting residents — would be excluded.
- Mandatory disclosure and human sign‑off: any AI‑assisted draft must be disclosed as such and reviewed by a human before finalisation. The RDOS board agenda item itself disclosed AI assistance, demonstrating the transparency rule in practice.
- Data boundaries: strict prohibition on entering personal, confidential, or restricted data into the AI interface; technical controls and training to reinforce the rule are anticipated but not fully detailed in public reports.
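In practice the data‑boundary rule would be enforced with tenant controls such as Purview DLP, but its intent can be illustrated with a minimal pre‑submission screen. Everything below is a hypothetical sketch: the patterns are simplified stand‑ins, not RDOS's actual rules.

```python
import re

# Illustrative screen for the "data boundaries" rule: flag prompts that appear
# to contain personal or restricted data before they reach the AI tool. Real
# enforcement would rely on Purview DLP policies; these regexes are stand-ins.
BLOCKED_PATTERNS = {
    "SIN": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{3}\b"),  # Canadian Social Insurance Number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "restricted label": re.compile(r"\b(confidential|restricted)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, rx in BLOCKED_PATTERNS.items() if rx.search(prompt)]

print(screen_prompt("Summarize the attached CONFIDENTIAL report for 123-456-789"))
print(screen_prompt("Draft a routine agenda summary"))
```

A screen like this catches only obvious patterns; it complements, rather than replaces, training and DLP policy.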
Verifying the technical claims: what can be checked from public sources
Copilot cost per seat
Microsoft’s public pricing lists Microsoft 365 Copilot at US$30 per user per month (annual billing) for the enterprise plan, a figure that is the right order of magnitude to use in multi‑year OPEX modelling. This price is a meaningful recurring cost that must be netted against any claimed productivity gains. Practical note: Microsoft has periodically adjusted bundles and commercial offerings — for example, bundling specialist Copilots (Sales/Service/Finance) into main packages — and broader Microsoft 365 list prices have been subject to change, so any municipal budgeting must use current procurement quotes for the exact SKU and market region.

Data handling and tenant controls
Microsoft documents and product pages describe an architecture in which enterprise Copilot can be configured to operate within a customer’s Microsoft 365 tenancy, inheriting identity and access controls (Entra ID), and enabling Purview/DLP rules to reduce the risk of inadvertent data leakage. Microsoft also states that, for enterprise Copilot deployments, customer prompts and tenant data are not used to train the foundational Copilot models unless a tenant explicitly opts in or contractual provisions differ. This tenancy‑bound model is the primary justification RDOS staff use for choosing Copilot over public consumer chatbots.

Caveat: vendor documentation is authoritative about feature design but not a substitute for contract terms. Municipal procurement must secure explicit contractual protections (non‑training clauses, deletion/export rights, audit access) and verify the tenant is configured correctly. Public marketing claims do not create enforceable rights by themselves.

The $91,000 savings claim — what’s verifiable
Local reporting attributed an estimated $91,000 in annual savings to scaling Copilot within RDOS. The pilot numbers described a 15 work‑day aggregate saving in one month, but the public reporting to date does not include the underlying pilot workbook, hourly rates, or extrapolation methodology that produced the $91k figure. That means the headline number is directional and plausible for routine drafting tasks — but not yet auditable. The board should demand the pilot workbook and a line‑by‑line calculation before relying on any specific budgetary benefit.

Strengths of RDOS’s proposed approach
- Enterprise‑first posture reduces uncontrolled exposure: by sanctioning Copilot in tenant mode, RDOS gains the ability to apply existing Microsoft 365 controls (Purview, DLP, conditional access) rather than relying on network policing alone. This is an accepted, pragmatic risk‑reduction pattern for public bodies.
- Narrow, low‑risk use cases: restricting AI to drafting, editing and summarisation — tasks where hallucination risks are manageable with human oversight — keeps potential harms limited while allowing real productivity improvements for repetitive tasks like email and routine memos.
- Disclosure and human‑in‑the‑loop: mandating attribution and managerial review preserves accountability and helps ensure outputs are treated as drafts rather than authoritative documents. This principle aligns with recommended public‑sector practice.
- Pilot‑driven policymaking: using an observed pilot to develop policy is the right sequence — pilot, measure, write tight governance — and it allows the organisation to identify operational gaps before broader adoption.
Risks, blind spots and what the board must insist on before adopting a long‑term mandate
1) Opaque ROI mechanics and budget risk
The $91,000 estimate needs a public, auditable workbook: hourly loaded labour rates, which tasks were measured, how time saved was recorded, whether savings were annualized directly from a single month, and whether governance, training and incident response costs were included. Without this, the figure is a planning aid, not a budget certainty.

2) Tenant misconfiguration and operational drift
Enterprise protections are only effective when correctly configured. Misplaced connectors, relaxed DLP rules, or administrative role creep can silently widen data flows. The board should require a tenant configuration attestation and an independent audit (Purview/DLP/connector settings, retention rules) before any seats are widely provisioned.

3) Records, FOI and prompt discoverability
Prompts, draft outputs and human edits may be discoverable under Freedom‑of‑Information regimes. The policy must define what counts as an official record, set retention windows for prompt logs, and publish redaction workflows. Otherwise the district faces legal and reputational exposure when FOI requests arrive.

4) Shadow AI (unsanctioned use)
Banning consumer models for official business often drives staff to try public tools on personal devices. Technical and cultural controls — network restrictions, endpoint DLP, user‑friendly sanctioned tools and rapid IT support — are needed to reduce the incentive for shadow AI. Training and sensible licensing access also reduce noncompliant workarounds.

5) Procurement and vendor lock‑in
Vendor marketing does not replace enforceable contract language. The district must insist on non‑training clauses, express deletion and export rights for prompts/telemetry, defined breach notification timelines, audit access and clear exit/transition obligations. Without these, long‑term legal exposure or vendor lock‑in is possible.

6) Hallucination and decision‑facing risk
Even in drafting use cases, generative AI can invent plausible but false statements. For any content that influences policy, licensing, enforcement, or public health/safety decisions, the policy should forbid unsupervised AI use or require a formal DPIA and a named human reviewer attesting to factual accuracy.

A practical operational checklist the RDOS board can require immediately
- Publish the pilot workbook and methodology: full raw metrics, sampling methods, how “15 work‑days saved” was measured, hourly rates and the extrapolation to the $91,000 figure. Treat the workbook as a public annex to the staff report.
- Obtain a tenant security attestation: a 30‑day audit of Microsoft Purview, DLP rules, connector permissions, retention settings and prompt logging configuration, conducted by IT and verified by an independent third party where feasible.
- Strengthen procurement: require explicit non‑training clauses, deletion/export rights for prompts and telemetry, audit rights, data residency options if applicable, and clear exit provisions in any Copilot‑related contract. Make licence authorization conditional on these contract protections.
- Set a records and FOI policy for AI: define retention windows for prompts/outputs, publish redaction workflows for PII, and clarify when AI‑assisted materials are subject to disclosure. Produce a resident‑facing assurance statement when AI materially influences a decision or document.
- Make licences conditional on training and stewardship: require mandatory role‑based prompt hygiene and verification training, sign‑off by departmental AI stewards, and logged completion before issuing Copilot seats.
- Meter usage and budget lines: implement telemetry, quotas and consumption alerts; tie seat issuance to budget approvals and publish quarterly KPI dashboards (time saved, human edit rates, incidents, cost per seat).
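The metering item can be made concrete with a toy KPI roll‑up. The record layout, the numbers, and the use of Microsoft's US$30 list price are illustrative assumptions; real figures would come from tenant telemetry and procurement quotes.

```python
# Toy quarterly KPI roll-up for the metering checklist item above. Records are
# invented for illustration; real data would come from tenant telemetry.
records = [
    {"user": "a", "minutes_saved": 300, "drafts": 10, "drafts_edited": 9},
    {"user": "b", "minutes_saved": 120, "drafts": 5, "drafts_edited": 5},
]
SEAT_PRICE_PER_MONTH = 30  # assumed per-seat price (US$ list); use procurement quotes

hours_saved = sum(r["minutes_saved"] for r in records) / 60
edit_rate = sum(r["drafts_edited"] for r in records) / sum(r["drafts"] for r in records)
quarterly_seat_cost = SEAT_PRICE_PER_MONTH * len(records) * 3

print(f"hours saved: {hours_saved:.1f}")
print(f"human edit rate: {edit_rate:.0%}")  # share of AI drafts revised before use
print(f"quarterly seat cost: ${quarterly_seat_cost}")
```

A high human edit rate is healthy here: it indicates outputs are being treated as drafts, which is exactly what the disclosure and sign‑off rules require.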
Why these operational steps matter — the taxpayers’ and staff’s view
For taxpayers, the most expensive mistakes are not license fees but data breaches, FOI surprises and procurement gaps that can produce long legal tails or reputational damage. Licence costs are recurring but relatively predictable; governance and incident response can be far costlier and less predictable without pre‑negotiated contractual protections and operational controls.

For staff, a sanctioned, supported Copilot can reduce administrative burden and speed routine drafting. But this benefit requires training, ongoing stewardship, and cultural change so that AI is used as a productivity amplifier rather than a shortcut that bypasses verification. Managers must be resourced to monitor usage and validate outputs.
How RDOS’s approach compares with best practice in other municipalities
Several towns and districts that have adopted AI policies follow the same layered model RDOS is proposing: pilot first, whitelist an enterprise tool, restrict use to low‑risk tasks, require disclosure and human sign‑off, and couple policy with procurement and tenant audits. These common elements are not theoretical — they’re the pragmatic response to the twin priorities of capturing productivity gains and protecting resident data. However, real success depends on follow‑through: policy without audits, contractual protections and training often becomes a paper exercise.

Immediate recommendations for the RDOS board (concise)
- Approve a time‑limited, conditional expansion of the pilot rather than an open, ongoing licence buy: expand in phases and require the tenant attestation and procurement addenda first.
- Demand publication of the pilot workbook before any budgetary reliance on the $91k figure; treat the number as provisional until independently verifiable.
- Require that licences be conditional on mandatory role‑based training, a signed acceptable‑use acknowledgement, and assignment of departmental AI stewards responsible for compliance.
- Mandate quarterly public reporting for the first year, including KPIs: time saved, human edit rate, incidents and cost per seat. Make reviews part of the board agenda to maintain oversight.
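To see why the workbook matters before any budgetary reliance, a toy annualization shows how sensitive the headline figure is to the loaded labour rate. Every input here is an assumption; the actual RDOS methodology, rates, and seat count are not public.

```python
# Toy sensitivity check on annualizing a one-month pilot saving. All inputs are
# assumptions for illustration; the actual RDOS methodology is not public.
PILOT_DAYS_SAVED_PER_MONTH = 15  # reported aggregate pilot saving
HOURS_PER_DAY = 7.5              # assumed workday length
SEATS = 20                       # assumed scaled seat count
SEAT_COST_PER_YEAR = 30 * 12     # assumed per-seat cost (US$30/month list price)

for loaded_rate in (35, 45, 55):  # assumed loaded hourly labour rates
    gross = PILOT_DAYS_SAVED_PER_MONTH * HOURS_PER_DAY * 12 * loaded_rate
    net = gross - SEATS * SEAT_COST_PER_YEAR
    print(f"${loaded_rate}/h -> gross ${gross:,.0f}/yr, net ${net:,.0f}/yr")
```

Under these assumptions, net annual savings range from roughly $40,000 to $67,000; reaching $91,000 requires a higher loaded rate, more seats, or more time saved per user. Those are exactly the inputs the published workbook should pin down.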
Balanced verdict
RDOS’s draft policy is a prudent, conservative first step: choosing an enterprise Copilot, narrowing permitted uses, mandating disclosure and human review, and anchoring the decision in a real pilot are all defensible moves that reflect municipal best practice. These steps will likely reduce near‑term exposure compared with allowing unrestricted consumer AI on official devices.

Yet the policy’s protective value depends entirely on operational detail. The board should withhold an open, long‑term commitment until (a) the pilot workbook and methodology are published and validated, (b) a tenant configuration attestation and a procurement package with non‑training and deletion clauses are provided, and (c) funded training, telemetry and FOI/records processes are in place. Without these, the apparent benefits may erode under real‑world usage and the district could face avoidable legal, financial and reputational risk.
Final practical cautions
- Treat the $91,000 headline as directional until the underlying math is disclosed and audited. Transparency matters for public trust and for sound budgeting.
- Do not substitute vendor marketing for contract language — procurement clauses are the legal guardrails that survive personnel or product changes.
- Operationalise the policy with technical enforcement (DLP, conditional access, endpoint rules) and cultural supports (training, stewards, clear resident notices) to avoid the well‑known “policy on paper” failure mode.
RDOS’s draft is the responsible middle path between an outright ban that forgoes efficiency and an unrestricted free‑for‑all that exposes resident data and public records to risk. With the pilot workbook, a tenant attestation, procurement protections and funded operational controls in hand, the board can turn a promising experiment into a defensible, auditable program that captures productivity gains while protecting the public interest.
Source: Penticton Herald RDOS lays down rules for AI use