The Regional District of Okanagan‑Similkameen (RDOS) is moving from experiment to policy: after a month‑long Copilot pilot in summer 2025, staff are being asked to adopt a draft AI policy that would allow only Microsoft Copilot for specific, low‑risk internal tasks — with mandatory disclosure, managerial review, and strict bans on sharing personal or confidential data with AI tools.
Background / Overview
Municipalities across Canada and internationally have taken a cautious, pilot‑first approach to generative AI: short, measured trials that test productivity benefits while authorities put governance, procurement and records rules in place. The RDOS pilot — reported as a 20‑staff trial that saved the equivalent of 15 work‑days over its month‑long run — fits that pattern and is now the basis for a draft policy going before the RDOS Board on January 8 (committee of the whole) with a vote scheduled for January 22. The RDOS Board meets regularly at the RDOS offices on Martin Street in Penticton; the 2026 schedule confirms board dates and meeting locations on the RDOS site. This article examines what the RDOS proposes, places it in the context of municipal AI governance best practice, verifies key technical and cost claims where possible, and highlights practical strengths and unresolved risks the board should address before approving a longer‑term rollout.
What the RDOS trial reportedly found
- The pilot used Microsoft Copilot and involved 20 staff members during a one‑month summer trial.
- RDOS reported the pilot produced a total time savings equivalent to 15 work‑days, with the biggest gains from faster email drafting.
- The draft policy under consideration would make Copilot the only allowed generative AI tool for staff and would restrict its use to:
- drafting internal emails, reports and communications,
- summarizing non‑confidential public information, and
- assisting with routine document editing.
- The policy would require disclosure any time AI was used to create content, ban the input of personal, confidential or restricted data to AI, and place monitoring and content‑review responsibilities with department managers and the Information Services team.
- RDOS claimed that with 16 staff on Copilot the district could save taxpayers $91,000 over a year, including subscription costs — a headline number that the board will need to back up with an explicit costing methodology.
Verifying the key technical and cost claims
Copilot licensing cost
Microsoft’s public pricing for Microsoft 365 Copilot lists a commercial price of approximately US$30 per user per month (annual billing), which is consistent with enterprise market reporting and Microsoft’s own product pages. That figure is material when judging RDOS’s $91,000 projection because subscription fees are a recurring operating cost that must be netted against any time/value gains.
- Example math (transparent, not RDOS’s stated method): 16 seats × $30/user/month × 12 months = $5,760/year in subscription fees. If RDOS’s published $91,000 figure already includes subscription costs, then licensing represents a small portion of that claimed benefit. However, the underlying assumptions RDOS used (average loaded labor rates, which tasks were counted, whether the pilot’s one‑month savings were annualized directly, and whether the figure includes governance and training overhead) were not published in the article and therefore cannot be independently validated from public sources at this time.
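The arithmetic above can be made explicit in a short script. This is a back‑of‑envelope sketch, not RDOS’s stated method: the US$30 list price is public, but the loaded labour day rates and the straight‑line annualization below are hypothetical placeholders.

```python
# Back-of-envelope Copilot cost/benefit sketch (illustrative only).
# Public input: ~US$30/user/month list price for Microsoft 365 Copilot.
# Hypothetical inputs: the loaded day rates and straight-line annualization.
SEATS = 16
LIST_PRICE_PER_USER_MONTH = 30  # USD, annual billing; actual terms may differ

annual_subscription = SEATS * LIST_PRICE_PER_USER_MONTH * 12
print(f"Annual subscriptions for {SEATS} seats: ${annual_subscription:,}")  # $5,760

# Straight-line annualization of the reported pilot savings (15 days/month),
# valued at a range of hypothetical loaded labour rates -- not RDOS figures.
DAYS_SAVED_PER_MONTH = 15  # reported total across the 20-person pilot cohort
annual_days_saved = DAYS_SAVED_PER_MONTH * 12

for day_rate in (300, 400, 500):  # illustrative $/day loaded labour costs
    net_value = annual_days_saved * day_rate - annual_subscription
    print(f"At ${day_rate}/day loaded: net annual value = ${net_value:,}")
```

The spread across plausible day rates shows why the underlying workbook matters: reasonable inputs can land anywhere from roughly half the $91,000 headline to within striking distance of it.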
The pilot productivity numbers
RDOS reported 15 work‑days saved over a month across the pilot cohort. If that is aggregated across all 20 participants (i.e., 15 total days saved in that one month), annualizing that result or scaling it to a different number of seats requires clear methodological steps — which the public article does not reproduce. Municipal pilot reports that translate time savings into dollar savings typically disclose the assumed hourly rates, overhead multipliers, and the method of extrapolation (for example, whether time saved on emails is converted to cost avoidance or redirected into higher‑value work). Without those inputs, the $91,000 headline number should be treated as directional, not definitive.
The draft policy: what RDOS would allow (and forbid)
According to the public report, the draft RDOS policy contains several conservative guardrails that align with emerging municipal practice:
- Whitelist approach: only Microsoft Copilot would be permitted as a generative AI tool for staff. This reduces the risks of uncontrolled use of public consumer models.
- Permitted use cases limited to low‑risk tasks: drafting internal communications, summarizing non‑confidential public information, and routine editing. This excludes AI use in formal decision‑making, enforcement, or high‑sensitivity processes.
- Mandatory disclosure: staff would be required to disclose when an AI tool was used to produce a draft. The agenda item for the RDOS board itself was reported to have been drafted using Copilot and disclosed that fact — an early demonstration of the transparency rule.
- Data boundaries: explicit prohibition on sending personal, confidential or restricted data to AI tools. That reflects a simple but crucial data‑classification rule.
- Managerial sign‑off: department managers (with oversight by Information Services) would monitor staff AI use and be required to review generated content prior to official use or publication.
Strengths of RDOS’s proposed approach
- Enterprise‑first posture: limiting staff to Microsoft Copilot keeps activity inside an enterprise technology stack where tenant settings, Microsoft Purview, and DLP rules can be applied — a safer default than allowing public consumer chatbots. Municipal playbooks and recent council policies recommend precisely this tenancy‑bound approach as a risk‑reducing first step.
- Narrow, low‑risk use‑cases: restricting Copilot to drafting/internal editing and summarizing non‑confidential public information focuses on tasks where hallucination risk is manageable with human review, and where the potential upside (time saved) is real and measurable.
- Disclosure and human sign‑off: mandating that AI‑assisted outputs be disclosed and reviewed by managers helps preserve accountability and audit trails. This principle — treating AI outputs as drafts until attested by a human — is a consistent recommendation for public bodies.
- Pilot‑anchored decision: using a controlled pilot to derive policy is the right sequence: pilot → measure → tighten governance → scale only if the controls and benefits hold up under scrutiny. Many other local governments have followed this same pathway.
Critical gaps and risks RDOS should address before adoption
The draft policy is a good start, but the board should insist on operational and contractual confirmations before committing to a larger rollout. Key gaps to close:
- Publish the pilot methodology and math
- The $91,000/year saving claim needs a transparent back‑calculation: what hourly rates were used, how were the 15 days measured, was there task substitution (i.e., staff spent saved time on other billable tasks), and how were subscription and governance costs modelled? Without that, the figure is a plausibility claim, not an auditable ROI.
- Confirm tenant configuration and contractual protections
- Municipal risk is not eliminated by using Copilot; it is shifted to correct tenant configuration and procurement terms. Public bodies should obtain explicit contractual commitments (non‑training clauses, data deletion and retention guarantees, audit rights, telemetry commitments) or confirm equivalent protections in their Microsoft licensing agreements. Vendor marketing statements alone are not sufficient.
- Records, FOI and prompt logging policy
- AI prompts, agent outputs and human edits may become public records under access‑to‑information regimes. The policy must define how prompts and AI outputs are captured, retained, redacted and disclosed in response to FOI requests. This needs alignment with existing records retention schedules and Freedom of Information rules.
- Data loss prevention (DLP) and endpoint controls
- Technical controls must prevent staff from accidentally sending classified or PII data to any non‑sanctioned model. This typically requires a mix of tenant DLP rules, endpoint configuration, network filtering, and sensitivity labeling. Blocking consumer AI endpoints network‑wide helps reduce shadow AI risk.
- Training, stewardship and conditional access
- Access should be conditional on role‑based training completion. Appoint departmental “AI stewards” who can vet use cases, coordinate training, and escalate concerns. Issuing licences only after training and stewardship sign‑off reduces misuse risk.
- Operational telemetry, quotas and financial controls
- Copilot and agent features can generate metered costs. The IT team must monitor consumption, set quotas, and tie seat approvals to budget lines to avoid billing surprises. Track KPIs (time saved, error rate, incidents, cost metrics) and publish an annual AI usage statement for transparency.
- Plan for shadow AI
- Policies that ban open consumer tools for official business often spur staff to experiment with consumer models on personal devices. Address this with user‑friendly sanctioned tools, clear DLP, and rapid support for legitimate staff needs to reduce incentives for shadow AI.
Technical and security mitigations (practical checklist)
- Enforce strong identity and access management: MFA, conditional access and least‑privilege roles.
- Configure Microsoft Purview and tenant DLP to block uploads of PII or restricted data to Copilot/agent backends.
- Disable risky connectors and web‑grounding for high‑sensitivity accounts until policy is mature.
- Capture and retain logs of prompts, outputs and edits (with redaction rules), and integrate Copilot telemetry into SOC tooling where possible.
- Pilot with a small, instrumented cohort and meter everything: queries, tokens, time saved per task, and the incidence of hallucinations or factual errors.
- Require managerial review and sign‑off for any AI‑generated content used in external communications or decision‑facing documents.
Records management and public transparency — a municipal imperative
When Copilot assists with drafting policy notes, council briefings, or communications, municipalities must define what counts as an official record. AI‑assisted drafts, the prompts used to create them, and subsequent human edits may be discoverable in access‑to‑information requests unless the district explicitly sets retention and redaction rules.
Best practice recommendations include:
- Treat prompts and generated content as potentially discoverable unless explicitly redacted and justified.
- Publish a public assurance statement whenever AI materially influences a policy document or a decision that affects residents: what the AI did, who reviewed it, and how residents can request the source material.
- Set and publish retention windows for prompt logs and define redaction workflows for PII in those logs.
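To illustrate what a retention‑and‑redaction workflow for prompt logs could look like in practice, here is a minimal sketch; the retention periods, record categories, and PII patterns are all hypothetical placeholders, and production redaction would need far more robust tooling plus human review.

```python
import re

# Hypothetical retention windows for AI-related records, in days.
# These values are illustrative, not RDOS policy or a legal schedule.
RETENTION_DAYS = {
    "prompt_log": 730,           # prompts submitted to Copilot
    "ai_output_draft": 365,      # raw AI-generated drafts
    "manager_review_note": 730,  # human sign-off records
}

# Rough PII patterns redacted before FOI release (illustrative only).
PII_PATTERNS = [
    re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{3}\b"),   # SIN-like number formats
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
]

def redact(text: str) -> str:
    """Replace matches of each PII pattern with a [REDACTED] marker."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com re: file 123-456-789"))
```

Even a toy example like this makes the policy question concrete: someone must decide which record categories exist, how long each is kept, and which patterns are scrubbed before disclosure.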
If I were advising RDOS: a prioritized action plan before adoption
- Publish the pilot report in full: raw metrics (hours saved), assumptions, identified errors and manager review logs. This allows the board and public to scrutinize the $91k claim.
- Conduct a tenant security and Purview audit (30‑day turnaround) to confirm DLP, connector, and telemetry settings before enabling broader Copilot access.
- Make licences conditional on mandatory training and sign‑offs by departmental AI stewards.
- Insert procurement protections: non‑training clauses, deletion rights, audit access and clear exit provisions in any Copilot‑related agreement. Don’t rely on vendor marketing.
- Expand the pilot only after KPI reporting is in place (time saved per task, incidents, human edit rates, cost metrics) and commit to public reporting (quarterly for the first year).
Why this matters for taxpayers and staff
- For taxpayers: narrow, well‑governed use of Copilot can free staff time from repetitive drafting and reallocate it to higher‑value public services — but only if the claimed savings hold up when the district accounts for subscriptions, governance costs, training, and incident response overhead. The $91,000 claim is plausible, but the board should insist on the workbook that produced it so the public can evaluate the trade‑offs.
- For staff: a sanctioned, supported Copilot can reduce administrative burden — yet it requires training, prompt hygiene, and cultural change so that AI is used as a tool, not a shortcut that bypasses verification and nuance. Municipal HR and managers should be ready to update workflows and performance metrics accordingly.
Balanced verdict
RDOS’s draft policy is a prudent, conservative first step: it chooses a tenancy‑bound Copilot, narrows permitted uses to low‑risk drafting tasks, requires disclosure and manager review, and grounds the decision in a measured pilot. Those are credible mitigations aligned with municipal playbooks. Yet policy matters most when it is operationalized. The board should withhold a long‑term commitment until the following are provided and validated: the full pilot methodology, a tenant‑configuration attestation, procurement clauses that protect data and non‑training guarantees, a prompt‑and‑output retention policy compatible with FOI obligations, and a funded program for training and monitoring. Without these, the benefits may prove ephemeral and the public risk exposure larger than anticipated.
Final practical cautions and takeaways
- The $91,000 annual savings figure reported by local media is a useful headline but not an audit; the board should ask for the source workbook and assumptions behind it before relying on the number in budgeting discussions.
- Microsoft Copilot pricing is material and publicly measurable: at approximately US$30/user/month for Microsoft 365 Copilot, subscription costs will be a modest but recurring component of any municipal AI line item; RDOS should incorporate multi‑year OPEX modelling into budget forecasts.
- The most expensive mistakes are not license fees but data leaks, FOI surprises, and procurement gaps. Those are preventable with a short list of enforceable technical and contractual steps the board can require before approving a wider rollout.
Source: Penticton Western News AI policy coming to Regional District of Okanagan Similkameen - Penticton Western News