Cold Lake Adopts Principles‑Based AI Policy with Human Oversight

The City of Cold Lake’s council unanimously approved a new Artificial Intelligence policy on Jan. 27 that deliberately steers between outright prohibition and laissez‑faire adoption: it gives municipal staff a principles‑based decision framework to use embedded AI features while erecting explicit guards for privacy, records obligations and human oversight.

Background​

The policy emerged from administration work prompted by the rapid embedding of generative and assistive AI into everyday productivity tools — examples staff cited include Microsoft Copilot‑style features, Adobe’s generative capabilities, and AI‑driven web search enhancements. Rather than naming or banning specific vendors, Cold Lake’s approach creates a short checklist and a set of permitted and non‑permitted uses staff must run through before relying on AI for municipal work.
City leadership framed the policy as a practical response to an operational reality: AI is already present inside the apps people use, and the objective is to allow productivity gains without sacrificing resident privacy or municipal accountability. Kristy Isert, Cold Lake’s general manager of Corporate Services, told council the policy is intended to govern staff use and deliver clarity rather than to freeze technologies out of the workplace.

What the policy actually says (plain summary)​

  • The policy is principles‑first, not product‑first. It asks staff to assess privacy risk, records and retention obligations, the need for human verification, and whether a use would transfer decision authority to an automated system.
  • It distinguishes permitted low‑risk activities (drafting, internal summarization, retrieval assistance where no personally identifiable information (PII) is exposed) from non‑permitted activities (submitting PII or confidential files to third‑party models, and automated profiling or decisions with material effects on residents).
  • A specific, clarified prohibition now applies to generating photos, images or videos of people rather than banning image generation wholesale — a change made in committee to avoid impeding legitimate inanimate or facility‑related imagery.
Council also directed that mandatory training tied to the new policy be rolled out for staff; the program builds on existing privacy training and will include AI‑specific modules. Deputy Mayor Bill Parker explicitly asked for elected‑official training as well, and administration confirmed materials were under development.

Why Cold Lake’s approach matters: municipal context and competing priorities​

Municipal governments sit at a high‑risk, high‑reward junction for AI adoption. On the upside, retrieval‑grounded assistants and draft‑first workflows can reduce clerical burden, accelerate constituent responses, and unlock institutional knowledge trapped in legacy documents. On the downside, municipal operations involve statutory records regimes, sensitive personal data, and democratic accountability — areas where an AI mistake or an inadvertent data leak can produce legally actionable harm and public distrust.
Cold Lake’s stance — govern, train, pilot, and require human attestation — follows the playbook many local authorities are adopting: favor enterprise‑bound tools, require human‑in‑the‑loop validation, and insist on procurement safeguards before expanding use. That posture reduces the prospect of a policy becoming obsolete when vendors change features and helps keep operational controls aligned with legal obligations.

The image‑generation restriction: a pragmatic guardrail with important edges​

One headline change in Cold Lake’s approved text was the committee‑level clarification that image generation of people is prohibited, rather than an all‑purpose image ban. That targeted restriction is defensible because synthetic images of people can be realistic, identifiable, or weaponized for impersonation and reputational harms. Narrowing the ban preserves legitimate uses such as facility renders, diagrams, maps, or accessibility assets that do not depict identifiable humans.
That said, policy language must answer follow‑up questions to avoid inconsistent enforcement:
  • How is “people” defined? Does it include deceased persons, minors, or staff portraits?
  • Does the restriction cover stylized avatars or clearly fictional characters?
  • Does it prohibit only in‑house generation, or also the incorporation of externally sourced synthetic images of people?
  • Are exceptions possible with documented consent and formal approvals?
Without explicit answers, staff will face ambiguous situations and may default to unsafe workarounds or shadow AI. The policy should therefore contain a short definitional annex and an exceptions pathway managed by an AI governance group.

Technical controls that must follow policy to make words enforceable​

A written policy is necessary but not sufficient. Operational controls convert governance into enforceable protection. Cold Lake’s policy acknowledges embedded AI in Microsoft, Adobe and search tools; the practical next steps are routine but essential:
  • Map where AI features are enabled across cloud services (Teams, Word, Excel, Adobe apps, browsers) and confirm admin‑level controls.
  • Apply Data Loss Prevention (DLP) rules and endpoint restrictions to block or flag uploads containing regulated identifiers or images with faces.
  • Configure tenant‑level logging, Purview retention and prompt‑audit trails so interaction logs are discoverable, redactable and consistent with municipal records obligations.
  • Create sanctioned alternatives to consumer public models so staff do not resort to shadow tools on personal devices.
These are standard operational recommendations for municipal AI governance and reflect common failure modes — notably misconfiguration and shadow AI — that lead to accidental leakage despite good policy language.
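As an illustration of the kind of DLP rule described above, the sketch below screens outbound prompt text for common personal identifiers before it reaches an external model. The patterns and function names are hypothetical; a production deployment would rely on the tenant's built‑in DLP engine (for example, Microsoft Purview policies) rather than hand‑rolled code like this.

```python
import re

# Hypothetical patterns for common Canadian personal identifiers.
# A real deployment would use the tenant's managed DLP sensitive
# information types, not custom regexes maintained by hand.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"),
    "sin":   re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # Social Insurance Number format
}

def screen_prompt(text: str) -> list[str]:
    """Return the identifier types found in a prompt; an empty list means clear to send."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

flags = screen_prompt("Please summarize the complaint from jane@example.com, SIN 123-456-789")
# flags -> ['email', 'sin']; a non-empty result would block or escalate per policy
```

The point of the sketch is the control flow, not the patterns: a flagged prompt is stopped or escalated before it ever leaves the tenant, which is the behaviour the policy's non‑permitted‑use list depends on.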

Procurement and vendor verification: why “marketing statements” aren’t enough​

Vendors commonly market enterprise products with privacy‑friendly messaging (for example, claiming that enterprise Copilot modes do not use customer prompts to train public models). These vendor statements are meaningful, but they are insufficient as a legal or operational guarantee. Cold Lake must convert vendor promises into contract language: Data Processing Addenda (DPAs), explicit non‑training clauses, deletion/egress rights, audit access and breach‑notification timelines.
Key procurement actions for the City:
  • Require explicit non‑training guarantees or opt‑out terms for any service that could receive municipal prompts or documents.
  • Verify, in writing, data residency and telemetry retention policies for licensed products.
  • Insist on audit clauses permitting the municipality (or an independent third party) to verify tenant settings and telemetry behavior.
  • Make licence issuance conditional on signed, enforceable contract clauses rather than on FAQ pages or product whitepapers.
Absent those contractual protections, the city risks long‑term exposure if vendor practices evolve or if telemetry retention creates unexpected discoverability under records or freedom‑of‑information rules.

Training, culture and the human‑in‑the‑loop requirement​

Cold Lake tied the policy to mandatory training: a sensible move. Training should be role‑based, with tool access gated on certification, and must cover:
  • Prompt hygiene: what must never be pasted into a model prompt (PII, case notes, health or financial identifiers).
  • Verification standards: what constitutes sufficient human review before publication or decision use.
  • Records handling: how prompt logs and AI‑assisted drafts are stored, redacted and disclosed under records request regimes.
  • Incident reporting: how staff escalate suspected data leaks or hallucination‑driven errors.
Elected officials and managers should receive tailored modules that focus on accountability, FOI exposure and public communication. The administration’s commitment to rolling out training “in the coming weeks” is good — but the City should make licence issuance contingent on completion and maintain a central roster of certified AI stewards.

A practical 90‑day operational roadmap (recommended)​

To translate policy into practice, Cold Lake should prioritize the following measurable steps over the next 90 days:
0–30 days
  • Inventory: map all locations where AI features are enabled across SaaS, endpoint agents and browser extensions.
  • Tenant audit: verify Purview/DLP settings, connector permissions, and retention defaults in Microsoft 365 and Adobe admin consoles.
  • Blocking rules: deploy DLP/endpoint rules that flag or block prompts containing PII or face images directed at public endpoints.
30–60 days
  • Contract remediation: ensure new and renewing AI vendor contracts have explicit non‑training and deletion clauses.
  • Sanctioned toolbox: publish an approved‑tools list and provide centrally managed access to tenant‑bound assistants where needed.
  • Resident notice: publish a plain‑language one‑pager explaining where AI is used and how residents can request human review.
60–90 days
  • Training rollout: complete mandatory role‑based modules and certify staff; make further access contingent on certification.
  • Governance body: establish an AI governance committee (IT/security, legal/records, communications, service leads) to review exceptions and DPIAs.
  • Metrics and transparency: define KPIs (time saved, incidents, prompt retention volumes) and commit to periodic (six‑month) public reporting.
This roadmap converts policy language into tangible checkpoints and reduces the chance the policy remains aspirational.
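The "metrics and transparency" step above implies that someone can actually aggregate the logged data into a publishable report. A minimal sketch of such a six‑month KPI rollup is below; the field names (`pii_flags`, `prompt_chars`) are illustrative assumptions about what a prompt‑audit log might record, not anything specified in the policy.

```python
def kpi_summary(entries: list[dict]) -> dict:
    """Aggregate simple, publishable KPIs from prompt-audit metadata.

    Each entry is assumed to carry per-interaction metadata only
    (user, prompt length, DLP flags), never the prompt text itself.
    """
    flagged = sum(1 for e in entries if e.get("pii_flags"))
    return {
        "interactions": len(entries),
        "flagged_incidents": flagged,  # prompts a DLP screen flagged
        "prompt_volume_chars": sum(e.get("prompt_chars", 0) for e in entries),
        "users": len({e["user"] for e in entries}),
    }

report = kpi_summary([
    {"user": "jdoe", "prompt_chars": 412, "pii_flags": []},
    {"user": "asmith", "prompt_chars": 150, "pii_flags": ["email"]},
])
# report -> {'interactions': 2, 'flagged_incidents': 1, 'prompt_volume_chars': 562, 'users': 2}
```

Even a rollup this simple gives council something concrete to publish at the six‑month review, which is what turns "commit to periodic public reporting" from aspiration into routine.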

Common risks and how to mitigate them​

  • Shadow AI: Staff using consumer chatbots on personal devices leads to uncontrolled leakage. Mitigation: DLP & network filtering, plus usable sanctioned alternatives.
  • Hallucinations and accuracy failures: Generative systems can invent facts that look plausible. Mitigation: Treat AI outputs as drafts requiring named reviewer attestation, and sample‑check published outputs.
  • Records discoverability: Prompt histories and AI‑edited drafts can become discoverable under FOI and records‑retention regimes. Mitigation: Define retention and redaction standards, and log prompt metadata (user, timestamp, model version).
  • Procurement exposure: Vendor marketing claims aren’t a legal shield. Mitigation: Insert enforceable DPAs, non‑training clauses and audit rights into contracts.
  • Ambiguous image rules: Prohibiting image generation of “people” needs precise definitions to avoid grey areas. Mitigation: Add a definitional annex and formal exceptions process.
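The records‑discoverability mitigation above hinges on capturing consistent prompt metadata. A minimal sketch of what one audit entry might look like follows; the field names are illustrative assumptions, not a Purview schema or anything drawn from the policy itself.

```python
import json
from datetime import datetime, timezone

def audit_record(user: str, model_version: str, prompt_chars: int, pii_flags: list[str]) -> str:
    """Build a JSON audit entry for one AI interaction.

    Stores metadata only (not the prompt text itself), so the log supports
    FOI discoverability review without duplicating sensitive content.
    """
    entry = {
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt_chars": prompt_chars,  # length only, for retention metrics
        "pii_flags": pii_flags,        # output of a DLP screen, if any
    }
    return json.dumps(entry)

line = audit_record("jdoe", "copilot-2025-01", 412, [])
```

Logging metadata rather than full prompt text is a deliberate trade‑off: it keeps the log itself out of scope as a second copy of sensitive content while still answering the who/when/which‑model questions a records request would raise.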

Where Cold Lake’s policy is strong — and where it must be tested​

Strengths
  • Principles‑based approach: By focusing on decision principles rather than product lists, the policy is future‑resilient and less likely to be rendered obsolete by vendor updates.
  • Training linkage: Tying mandatory training to the policy improves the odds it will be practiced rather than ignored.
  • Targeted image ban: Narrowing the image prohibition to people is a pragmatic compromise that preserves useful use cases while protecting against high‑risk synthetic person images.
Weaknesses / operational gaps
  • No visible procurement commitments yet: The policy’s protections depend heavily on vendor contract language; council papers do not (publicly) disclose whether current licences include non‑training guarantees. This must be remedied quickly.
  • Execution risk: Without immediate technical audits, DLP rules and endpoint controls, the policy’s words may not prevent accidental leakage.
  • Potential ambiguity in image prohibition: Definitions and exceptions must be spelled out to avoid inconsistent application.
These are not fatal flaws — they are foreseeable next steps that the administration and council can address through the short 90‑day roadmap above.

A final practical checklist for council and administration​

  • Require a tenant security audit and remediation report to be presented to council within 30 days.
  • Make AI access conditional on completion of role‑based training and a signed stewardship acknowledgement.
  • Insert enforceable non‑training, deletion and audit clauses into all AI vendor agreements.
  • Publish a simple, plain‑English public notice describing where AI is used and how residents can request human review.
  • Convene an AI governance committee and schedule policy review every six months with public reporting of KPIs.

Conclusion​

Cold Lake’s policy is a pragmatic, governance‑first response to a rapidly evolving workplace reality: AI is already embedded in the tools municipal staff use, and a balanced framework is better than an impractical ban or an ungoverned free‑for‑all. The strengths of the policy lie in its principles‑based approach, the explicit human‑in‑the‑loop expectation, and the pragmatic narrowing of the image‑generation ban to images of people.
Its success will not be judged by the language alone but by follow‑through: technical audits, enforceable procurement clauses, conditional licence issuance tied to certified training, and transparent public reporting. If Cold Lake moves quickly on the operational checklist outlined above, it can capture real productivity gains while preserving resident privacy, records integrity and public trust. Left unexecuted, the policy risks becoming a well‑intentioned but unenforced guideline that fails to prevent the very harms it seeks to avoid.

Source: LakelandToday.ca, "City of Cold Lake council approves new artificial intelligence policy for municipal staff"
 
