The Regional District of Okanagan‑Similkameen (RDOS) is preparing a formal artificial intelligence policy that would restrict staff and contractor use to a single, enterprise‑managed product — Microsoft Copilot — following a short summer pilot that staff say produced measurable productivity gains and a projected annual cost saving the board will now be asked to vet. The staff report frames the move as a cautious, security‑first rollout: licences granted only by department managers, usage restricted to drafting and routine editing of non‑confidential material, and mandatory human review of any AI‑generated final products. The proposal and its headline claims were summarized in local reporting and reproduced in staff and community notes ahead of board consideration.
Background / Overview
Municipal and regional governments across Canada and elsewhere have adopted a conservative, pilot‑first posture toward generative AI: small trials, limited use cases, then policy and procurement changes before any wider roll‑out. RDOS followed that pattern with an internal study in June–July 2025 that staff say involved 16 participants and focused on Copilot as the enterprise option. According to the public summary, the pilot used Copilot for drafting correspondence, summarizing public information, and routine document editing; staff concluded that enterprise Copilot is preferable to public consumer models because it can be configured to operate inside the organisation’s Microsoft 365 tenancy and therefore makes it easier to apply data loss prevention (DLP), access controls, and other tenant‑level protections. RDOS staff further reported an expectation of “significant productivity gains” and offered an estimate of cost savings when the program scales — a figure that has drawn scrutiny because the public reporting does not include the underlying workbook or methodology used to convert time savings into dollars. Community discussion threads circulating preparatory analysis echo that caution and urge the board to require the pilot workbook and tenant attestation before approving a long‑term commitment.
Why RDOS wants a Copilot‑only policy
The vendor‑tenancy tradeoff
Staff argue the core benefit of a Copilot‑only policy is governance: Copilot integrates with Microsoft 365 and can be constrained by tenant settings (such as Purview DLP and conditional access), which reduces some risks associated with public consumer models that send prompts outside the organisation. Microsoft’s documentation confirms enterprise Copilot options are designed to inherit tenant security, compliance and data protection controls, and that customer data handling settings (including whether prompts are used for product improvement) can be configured by tenant administrators. That design is a material advantage for public bodies that already rely on Microsoft 365 for email, file storage and identity.
Narrowing use cases to reduce exposure
The proposed RDOS guidelines restrict allowed AI activities to tasks judged low‑risk: drafting correspondence and internal reports, summarizing non‑confidential public information, and grammar/formatting assistance. This work‑grounded approach — using AI as a drafting aid rather than an autonomous decision maker — aligns with widely recommended municipal practice that treats generative AI as an augmenting tool, not a substitute for human responsibility.
Managerial control and staged rollout
Granting licence approval to department managers creates a gatekeeping step that helps avoid blanket exposure and establishes an audit trail of who had access, and when. It also aligns licences to demonstrated business need and budgetary responsibility rather than making Copilot universally available by default.
Verifying key technical claims and numbers
Pilot size, timeframe and productivity claims
The public article reports a June–July 2025 pilot with 16 participants and claims of measurable time savings and a projected cost saving when scaled. These are plausible outcomes for writing‑heavy tasks — real organisations commonly report time savings from automated drafting — but the RDOS reporting published to date does not include the detailed methodology (hourly rates, how time saved was recorded, annualization assumptions, or whether saved time equates to cost avoidance versus redeployment). Without the workbook or raw pilot metrics, the headline saving must be treated as directional rather than auditable. The board should require the pilot workbook and a clear line‑by‑line calculation before accepting the number for budgeting.
Copilot data handling and residency
Microsoft’s enterprise documentation states that Copilot interactions can be configured to respect tenant‑level controls and that customer data is protected under Microsoft’s data commitments; in many commercial Copilot offerings prompts and responses are not used to train Microsoft’s foundational models unless a tenant explicitly opts in to data sharing. Microsoft also documents admin controls for data sharing and the ability to choose prompt evaluation regions for certain Copilot products. Those product assurances are important, but they are not a substitute for contract language: municipalities should secure explicit procurement clauses (non‑training guarantees, data deletion rights, audit and export provisions) in any Copilot agreement where legal risk or jurisdictional residency is a concern.
Licensing cost assumptions
Public Microsoft pricing for Copilot has shifted since initial announcements: historically, enterprise Copilot was communicated at roughly US$30 per user per month in broad messaging, while specific business bundles and promotional offers have shown varied list prices for SMB and enterprise offerings. Any RDOS cost‑benefit must use current, documented procurement quotes from Microsoft or its reseller and include multi‑year OPEX modelling (licence fees, training, monitoring, and incident response). The seat cost is a recurring operating expense and can materially affect the net benefit once governance overhead is factored in.
Strengths of the RDOS approach
- Enterprise-first posture: Sanctioning a tenancy‑bound Copilot reduces uncontrolled use of public consumer models and allows administrators to apply DLP, sensitivity labels and conditional access — concrete technical tools to mitigate leakage risk.
- Targeted, low‑risk use cases: Limiting AI to drafting and summarization keeps use where factual accuracy issues (hallucinations) are less likely to create immediate harm and where human review can catch problems.
- Managerial gatekeeping: Requiring department managers to grant licences helps align access with business need, budget ownership and role‑based responsibilities.
- Human‑in‑the‑loop requirement: Mandating that all AI‑assisted content be reviewed before finalisation preserves human accountability and an auditable sign‑off trail, an essential safeguard for public records.
Unresolved risks and operational gaps
Despite the conservative tone of the draft policy, several operational issues require clarification before approval:
- Opaque ROI mechanics: The headline saving figure is not supported by a public workbook. The board should request the pilot’s raw metrics, the hourly rates and the method of extrapolation used to produce any annualised savings number.
- Records, FOI and prompt retention: AI prompts, agent outputs and subsequent human edits may be subject to Freedom of Information and records retention rules. The policy must define whether prompts and AI drafts are considered official records, how they will be retained, redacted for PII, or exported in response to access requests. Canadian federal guidance on automated decision‑making underscores the need for explicit transparency and audit trails for public‑sector AI use and provides a model for impact assessments and peer review where applicable.
- Procurement and contractual protections: Product literature and marketing claims are not legal commitments. Municipal procurement should secure enforceable contract language — non‑training clauses, deletion rights, telemetry export and audit provisions, and clear exit terms — rather than rely on general vendor assurances.
- Tenant misconfiguration risks: Enterprise controls only work when correctly configured. A tenant audit (Purview, DLP, connectors, logs, retention settings) should be performed and certified before seats are broadly provisioned.
- Shadow AI and human behavior: Restricting official tools often spurs staff to experiment with consumer models on personal devices. Policies must be accompanied by accessible, sanctioned tools and rapid support to reduce the incentive for staff to use unsanctioned AI.
- Hallucination and factual accuracy: Even in low‑risk drafting contexts, generative AI can produce plausible falsehoods. Use‑case limits and mandatory verification steps are necessary but must be backed by training and KPIs that measure hallucination rates and correction burdens.
Practical technical checklist for the board to require before approval
- Conduct and publish a tenant security audit (30 days): Purview configuration, DLP rules, connector settings, prompt logging and telemetry exportability.
- Insist on procurement addenda with:
- Non‑training clauses (vendor must not use tenant prompts or documents to train models without explicit consent).
- Deletion and export rights for prompts, responses and related telemetry.
- Audit and compliance access rights.
- Clear exit and data‑return obligations.
- Implement role‑based access controls and make licences conditional on mandatory, recorded training completion.
- Configure DLP and sensitivity labelling to block uploads of PII and restrict Copilot from processing high‑sensitivity document classes.
- Define a records policy: set retention windows for prompts and outputs, redaction workflows for PII, and clear guidance on what is discoverable in FOI requests.
- Set operational KPIs for the pilot expansion: time saved, cost per seat, incidence of hallucinations, human edit/verification time, and incident counts.
- Publish a short public assurance statement whenever AI materially influences policy documents or decisions, explaining what the AI did and who reviewed it.
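The operational KPIs in the checklist can be computed from simple per‑task pilot logs. A hedged sketch, assuming each log row records minutes saved, verification time and whether a hallucination was caught — the field names are assumptions for illustration, not the pilot's actual schema:

```python
def pilot_kpis(rows):
    """Aggregate per-task pilot log rows into the KPIs listed above.

    Each row is a dict such as:
      {"minutes_saved": 25, "verify_minutes": 5, "hallucination": False}
    """
    n = len(rows)
    if n == 0:
        raise ValueError("no pilot data to aggregate")
    total_saved = sum(r["minutes_saved"] for r in rows)
    total_verify = sum(r["verify_minutes"] for r in rows)
    return {
        "tasks": n,
        # Saved time net of the human verification burden the policy mandates.
        "net_minutes_saved": total_saved - total_verify,
        "hallucination_rate": sum(1 for r in rows if r["hallucination"]) / n,
        "avg_verify_minutes": total_verify / n,
    }
```

Reporting net minutes saved (rather than gross) keeps the mandatory review time from being silently excluded from the benefit claim.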
A prioritized action plan (recommended)
- Publish the full pilot workbook and methodology to the board and public for independent review.
- Commission a 30‑day Purview/DLP/tenant‑configuration attestation from IT and an external audit firm where appropriate.
- Amend procurement conditions to include explicit non‑training, deletion and audit clauses; secure written confirmation from Microsoft/reseller.
- Release a plain‑language staff guidance sheet and a resident‑facing assurance notice explaining where AI is used and how records will be handled.
- Make licence issuance conditional on completion of a mandatory training module and a signed “AI Acceptable Use” acknowledgement.
- Expand the pilot in controlled phases, measuring KPIs quarterly and publishing results publicly during the first year.
How to think about cost and value realistically
- Licence fees are only part of the cost picture. Include training, governance, audit, and potential incident response costs in OPEX modelling.
- Avoid simple annualization of a single month’s savings without controls for seasonal workload, task substitution and variance by role.
- Track not just raw hours saved but net value — are staff using saved time for higher‑value work (net productivity) or simply finishing earlier (which may not translate into budget savings)?
- Build a multi‑year budget that anticipates licence renewal increases and additional governance staffing if the program scales.
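These cautions can be made concrete in a simple net‑benefit model. The figures below are placeholders rather than RDOS numbers; the point is the structure — value of net time saved, minus licence fees and governance overhead:

```python
def net_annual_benefit(seats, licence_usd_month, hourly_rate,
                       net_hours_saved_per_seat_year,
                       governance_overhead_usd_year):
    """Net benefit = value of net time saved minus licence and governance OPEX.

    'net_hours_saved_per_seat_year' should already subtract verification and
    review time, and exclude saved time that is not redeployed to other work.
    """
    licence_cost = seats * licence_usd_month * 12
    value_of_time = seats * net_hours_saved_per_seat_year * hourly_rate
    return value_of_time - licence_cost - governance_overhead_usd_year

# Placeholder inputs: 50 seats at the historically communicated US$30/user/month
# figure (not a procurement quote), a notional loaded hourly rate, and a
# notional governance budget.
example = net_annual_benefit(seats=50, licence_usd_month=30, hourly_rate=45,
                             net_hours_saved_per_seat_year=40,
                             governance_overhead_usd_year=25_000)
# → 47000 with these placeholder inputs
```

Running the same model with pessimistic inputs (fewer net hours, higher overhead) shows how quickly a headline saving can invert once OPEX is included.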
Records management and public trust: the municipal imperative
Municipalities often face FOI requests and archival obligations. When AI assists with drafting council reports, communications or policy notes, the prompts and generated drafts may become discoverable unless retention policies and redaction practices are explicit. Best practice for public bodies includes:
- Treating prompts and outputs as potentially discoverable unless a documented redaction and legal justification exists.
- Retaining minimal prompt logs for auditability while applying robust redaction for PII.
- Publishing an accessible summary when AI materially influences council reports or decisions: what AI did, who reviewed it, and how members of the public can request the materials.
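The minimal‑retention‑with‑redaction practice can be sketched as follows. The regex patterns are deliberately simplistic placeholders; production FOI redaction would require dedicated PII detection tooling and legal review:

```python
import re

# Simplistic illustrative patterns only; real PII detection needs proper tooling.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"), "[PHONE]"),
]

def redact_prompt(text: str) -> str:
    """Return a copy of a logged prompt with matched PII replaced by labels."""
    for pattern, label in PII_PATTERNS:
        text = pattern.sub(label, text)
    return text
```

Redacting at log‑write time, rather than at FOI‑response time, keeps the retained audit trail useful while limiting what can leak from the log itself.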
Balanced verdict
RDOS’s proposed policy — whitelist Microsoft Copilot, restrict use to low‑risk drafting tasks, require managerial licence approvals and enforce human sign‑off — is a prudent starting point. It aligns with broader municipal practice in treating enterprise Copilot as the safer initial option compared with unrestricted public consumer models. But policy language is only as good as implementation.
Before awarding a long‑term mandate, the board should insist on verifiable evidence: the pilot workbook with raw metrics, a tenant configuration attestation, procurement clauses that convert vendor marketing into contractual guarantees, and a published records and FOI policy that explains prompt and output treatment. Without these operational confirmations, the risk of FOI surprises, tenant misconfiguration, or unforeseen OPEX creep remains material.
Final recommendations (concise)
- Require the pilot workbook and methodology be published to the board and the public before any budgetary reliance on the claimed savings.
- Approve only a time‑limited pilot expansion with strict KPI reporting and a six‑month formal review.
- Make licences conditional on mandatory training, a signed acceptable‑use agreement, and a departmental steward responsible for compliance.
- Obtain explicit contractual protections from the vendor on data handling and non‑use for model training.
- Publish a short public statement describing where and how AI is used in RDOS communications and policy drafting.
RDOS has an opportunity to demonstrate how small‑scale, governance‑first pilots can be translated into defensible municipal AI practice. The difference between a promising experiment and a trustworthy program is not the vendor or the feature set — it is the board’s insistence on transparent metrics, enforceable contracts, and the operational work required to make controls stick.
Source: Castanet Artificial Intelligence policy in the works at Regional District of Okanagan Similkameen