Calgary Blocks ChatGPT on City Networks: A Guide to Municipal AI Governance

The City of Calgary’s decision to block ChatGPT on all municipal networks and devices on Friday, February 6, 2026, is a sharp, public-facing example of how local governments are grappling with generative AI: embracing its productivity benefits while clamping down on perceived data, privacy, and security risks. The ban does not mean the City is turning its back on AI — Calgary’s IT and services teams continue to run machine learning projects, use generative tools in limited, approved forms, and are formally rolling out Microsoft 365 Copilot to staff — but the move to block an external consumer-grade chatbot is a clear signal about where city hall draws the line when it comes to uncontrolled, unassessed AI on official systems.
This article synthesizes the City’s public comments, municipal technology documentation, and vendor disclosures to explain what Calgary’s action means for civic IT governance, what technical realities underlie the decision, and what other cities can learn from the trade-offs between generative AI productivity and municipal data protection.

Background: what the City announced and why it matters

The City’s statement, communicated to media and staff in early February 2026, says ChatGPT was blocked on City networks and devices effective February 6, 2026, because the tool had not completed the City’s mandatory risk assessment process. The City framed the ban as a mitigation step: “Blocking ChatGPT will mitigate the significant risk the use of this external tool introduces for our people, data, and services,” the response said. At the same time, officials emphasized that AI use continues in approved channels — notably Microsoft 365 Copilot — and that the City already employs deep learning in areas ranging from procurement forecasting to pavement-distress detection and large-event crowd counting.
Why this matters: municipalities hold large volumes of personal, operational, and sometimes legally sensitive data. When city employees use consumer-grade generative AI (chatbots, image generators, or public LLM interfaces), those tools often collect conversation data that may be logged, stored, or used to improve models. For any public organization subject to privacy laws and public accountability, the risk that municipal personally identifiable information (PII) or confidential operational data could be ingested by an external AI provider is not merely theoretical — it can trigger legal, financial, and reputational harm. Calgary’s response is therefore consistent with a risk-first posture: block, isolate, and only approve after a formal review.

Overview: the City’s AI posture in plain terms​

  • The City has an internal risk assessment requirement for any external technology used in the organization. Tools must be reviewed for how they collect, save, share, and secure information, and whether they meet privacy and security standards.
  • ChatGPT (the consumer/public OpenAI chatbot) was blocked across City networks and devices on February 6, 2026, because it had not completed that assessment.
  • Microsoft 365 Copilot has been reviewed and approved for City use; Copilot is being rolled out to staff with training and support.
  • The City already uses AI on several fronts — procurement forecasting, pavement distress detection, crowd counting at major events — illustrating that the ban is targeted rather than part of a broader AI moratorium.
  • City leadership, including the mayor, has publicly expressed caution and a desire for a formal municipal AI policy to govern use, ethics, and data handling.

Why Calgary blocked ChatGPT: a technical and governance view​

Municipal IT departments typically evaluate external tools against a set of common controls: data residency, encryption, access control, third-party processing policies, auditability, the vendor’s terms of service (particularly clauses about using customer data to train models), and legal/regulatory implications (privacy legislation, procurement rules, record retention). Calgary’s stated rationale fits this template.
Key technical concerns that likely drove the block
  • Data ingestion and model training: Consumer instances of ChatGPT historically accepted user inputs and retained some data for model improvement. For municipal use, anything sent to a public LLM — for example, a citizen’s medical detail or a confidential legal memo — could be used outside the City’s control unless explicitly prohibited by contract or enterprise controls.
  • Lack of enterprise guarantees: Enterprise AI products typically offer contractual Data Processing Addenda, tenant isolation, and explicit contractual promises not to use customer data to train models. Consumer-grade public chatbots often do not offer the same legal or technical assurances.
  • Auditability and retention: Governmental workflows require audit trails and records retention. If a chatbot interaction is ephemeral and not easily auditable, it complicates compliance with public records laws.
  • Cross-border data flow and sovereignty: Data sent to cloud-hosted LLMs may traverse jurisdictions; many municipal leaders are sensitive to where data resides and which foreign laws may apply.
  • Prompt injection and confidentiality leaks: Public LLMs can be vulnerable to prompt-injection attacks or generate outputs that inadvertently reveal stored information or hallucinate facts, which is unacceptable for many civic uses.
Taken together, these factors create a classic governance calculus: until a third-party AI tool can be shown to align with municipal security, privacy, and procurement controls, blocking its use is a defensible mitigation.
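The data-ingestion concern above can be made concrete with a minimal sketch of a pre-submission gate: scan outbound text for likely PII before it is allowed to reach any external LLM. This is a hypothetical illustration only; the function names and regex patterns are assumptions, and a real municipal deployment would rely on a proper DLP service rather than hand-rolled patterns.

```python
import re

# Hypothetical PII patterns for illustration; a real deployment would use
# a managed DLP service, not hand-rolled regexes like these.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"),
    "sin": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # Canadian SIN format
}

def pii_findings(text: str) -> list[str]:
    """Return the names of the PII categories detected in `text`."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def allow_prompt(text: str) -> bool:
    """Block a prompt from leaving the network if any PII pattern matches."""
    return not pii_findings(text)
```

Such a gate is a behavioral backstop, not a guarantee: pattern matching misses free-text identifiers (names, addresses, case details), which is why blocking unassessed tools outright remains the stronger control.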

What Calgary allows: Microsoft 365 Copilot and approved AI projects​

Calgary’s communications make a crucial distinction: the City is not AI-hostile. Instead, it is channeling AI through platforms that can be governed.
What approved AI looks like in practice
  • Microsoft 365 Copilot: The City reports that most employees have access to Microsoft 365 Copilot and that it has completed an internal review. Copilot’s enterprise offerings are built to operate with tenant isolation, enterprise contractual protections, and data handling controls — features that make it a more manageable option for municipal use when properly configured and licensed. Microsoft’s enterprise documentation and admin controls assert that Copilot interactions on enterprise content are processed within the customer’s service boundary and are not used to train Microsoft’s public foundation models.
  • Targeted machine learning projects: Calgary’s Emerging Technologies page (and other municipal presentations) document use cases such as:
      • Deep learning for crowd counting at large events (e.g., Canada Day, New Year’s Eve),
      • Computer vision for pavement distress and road-sign detection,
      • Procurement forecasting using AI to classify expenses and predict demand,
      • Generative or conversational chatbots that bolster 311-style services, when implemented under secure, assessed frameworks.
Why Copilot is viewed as an acceptable path forward
  • Enterprise Copilot editions provide stronger contractual protections than consumer chatbots, including Data Protection Addenda, customer data commitments, encryption in transit/at-rest, and tenant isolation.
  • Admin controls let IT disallow features, disable web grounding, and audit Copilot interactions — features that align with municipal requirements for oversight and records.
  • Microsoft’s messaging emphasizes that, in enterprise contexts, prompts and responses are not used to train their foundation models, which addresses a central municipal worry about uncontrolled data ingestion.
Caveat: approvals are conditional on configuration and governance. Licensing Copilot is necessary but not sufficient; correct administrative configuration, staff training, and ongoing audit are required to ensure safe use.

The politics: mayoral support and public expectations​

Calgary’s mayor publicly expressed support for the ChatGPT block while also urging the City to develop clear AI policies. His concerns focused on:
  • Personal data: Citizens did not consent to their PII being used to train commercial AI models.
  • Transparency and controls: Municipal residents deserve to know how their data is used and what protections exist.
  • Ethical deployment: Municipal AI should be deployed under clear ethical guidelines that outline allowable use cases, oversight mechanisms, and redress.
This political framing matters because municipal adoption of technology is rarely just a technical choice: it’s a public-policy decision. Elected officials must weigh constituent trust, legal obligations, and efficiency gains. Calgary’s dual-track approach — block consumer chatbots until assessed, but deploy vetted enterprise AI — balances political concerns and operational needs.

How Calgary’s approach fits a wider municipal trend​

Calgary is not alone. Over the last 18 months, education districts, utilities, and city governments have issued temporary bans or restrictions on consumer-grade LLMs while developing governance frameworks. The common elements across jurisdictions:
  • Immediate restrictions on consumer chatbots where risk is highest (student devices, public workstations, networked devices).
  • Rapid evaluation of enterprise-grade AI (Copilot, Google Workspace models with enterprise contracts, Anthropic/Claude enterprise offerings).
  • Creation of internal AI risk assessment processes and cross-functional review teams (IT, legal, procurement, privacy, records).
  • Training rollouts and “AI hygiene” campaigns for staff to prevent accidental PII leakage.
Calgary’s actions mirror this pattern: an organization-wide, procedural approach to assess and approve tools rather than a blanket adoption or rejection of AI.

Strengths of Calgary’s policy and implementation​

  • Risk-first posture: Blocking a high-profile consumer chatbot until it passes a municipal risk assessment reduces urgent exposure to uncontrolled data flows. This is a defensible, risk-limiting decision for a public entity.
  • Selective acceptance of enterprise AI: Approving Microsoft 365 Copilot (with controls) channels AI into an environment that can be audited and contractually constrained — a pragmatic path for productivity gains while protecting data.
  • Cross-functional review: The City’s requirement that IT, corporate security, legal, and supply management review tools is best practice and will produce more robust procurement decisions.
  • Public messaging about values: Articulating an AI Strategy rooted in privacy, transparency, fairness, and accountability sets expectations for ethical deployment and can build public trust — provided the strategy is made concrete and publicly available.
  • Operational AI use cases: The City already demonstrates practical AI value in forecasting and infrastructure monitoring, which helps ground policy in actual civic benefit rather than theoretical debate.

Risks, gaps, and where Calgary should tighten its approach​

  • Transparency gap: The City references an “AI Strategy,” but public-facing, detailed documentation or governance policies were not immediately available. Municipal AI governance only gains public trust if policies, risk criteria, and approved tool lists are published. The City should publish the risk assessment framework and a register of approved AI uses.
  • Vendor promises vs. operational reality: Microsoft and other cloud vendors offer enterprise privacy promises, but implementation is critical. Misconfiguration (e.g., incorrect tenant settings, accidental activation of web-grounding features) can leak data. The City must treat vendor contractual promises as the baseline, and technical controls, monitoring, and audits as the real enforcement mechanism.
  • Staff behavior and shadow IT: Blocking ChatGPT on City networks reduces risk, but employees may use mobile devices, personal subscriptions, or home networks to access consumer AI. Training, monitoring, and administrative policies must address bring-your-own AI scenarios and multi-account sign-ins.
  • Procurement and legal compliance: Municipal procurement processes can be slow; the City must reconcile the need for thorough legal review with the fast-moving nature of AI product updates and vendor patch cycles.
  • Record preservation and public archives: If AI assists with citizen communications or content generation, the City must ensure records are preserved under municipal archives and freedom-of-information rules. Automated chat logs, model outputs, and prompt histories may all require retention policies.
  • Third-party dependencies: Some approved tools depend on upstream models or services. The City must map data flows end-to-end and verify that downstream subcontractors and cloud regions meet policy requirements.

Practical checklist for municipalities considering similar steps​

  • Create a cross-functional AI review board including IT security, legal, privacy, records, and procurement.
  • Define and publish an AI risk assessment framework with objective criteria:
      • Data classification thresholds for allowable inputs (e.g., level 1–4).
      • Encryption, data residency, and audit requirements.
      • Contractual requirements (DPA, data-use clauses, model training restrictions).
  • Maintain an approved tools registry and a clear process for emergency blocking where risk is detected.
  • Require vendor attestations that enterprise product usage will not train public models or leak customer data, and then test those claims via configuration checks and audits.
  • Deploy staff training and AI hygiene guidance: what staff may never paste into any chatbot; how to use Copilot safely; how to handle citizen data.
  • Configure admin controls proactively: disable web grounding, set DLP/sensitivity labels, enable full auditing, and enforce conditional access.
  • Plan for public transparency: publish policy, approved use cases, and annual audits or impact assessments.
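The approved-tools registry and classification thresholds in the checklist above can be sketched as a single policy gate. Everything here is hypothetical (the registry entries, level names, and function signatures are assumptions), but it shows the deny-by-default shape the checklist implies: a tool must be registered, audited, and cleared for the sensitivity level of the data involved.

```python
from dataclasses import dataclass

# Hypothetical data classification levels; higher = more sensitive.
PUBLIC, INTERNAL, CONFIDENTIAL, RESTRICTED = 1, 2, 3, 4

@dataclass
class ApprovedTool:
    name: str
    max_data_level: int  # highest classification the tool may receive
    audited: bool        # passed the risk assessment

# Illustrative registry; real entries would come from the review board.
REGISTRY = {
    "m365-copilot": ApprovedTool("Microsoft 365 Copilot", CONFIDENTIAL, True),
}

def may_use(tool_id: str, data_level: int) -> bool:
    """Allow a tool only if it is registered, audited, and cleared for
    the classification level of the data involved; deny otherwise."""
    tool = REGISTRY.get(tool_id)
    return tool is not None and tool.audited and data_level <= tool.max_data_level
```

Under this shape, a consumer chatbot that never entered the registry is denied by default, which mirrors Calgary's block-until-assessed posture.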

Technical verification: checking the central claims​

  • GPT model status: OpenAI’s product and release notes indicate that GPT-5 and subsequent versions were deployed across ChatGPT product tiers in 2025–2026, with newer iterative releases appearing as “GPT‑5.1” and “GPT‑5.2.” OpenAI’s enterprise and product documentation describe model variants (Instant/Thinking) and rollout timelines. This supports LiveWire’s statement that ChatGPT operates on GPT‑5-class models as of early 2026.
  • Copilot enterprise assurances: Microsoft’s enterprise documentation and official admin guidance consistently state that Microsoft 365 Copilot interactions on enterprise content are processed within the tenant’s service boundary and are not used to train Microsoft’s foundation models. Microsoft also exposes admin controls for multiple-account access, DLP integration, and auditing. These technical and contractual measures explain why many organizations prefer Copilot for enterprise use over public chatbots.
  • Municipal AI use cases: The City of Calgary’s own emerging technologies documentation describes uses of deep learning for crowd counting, pavement distress detection, and procurement forecasting — confirming that Calgary is not abstaining from machine learning and generative tools broadly, but is instead controlling which tools and how they are used.
Where claims were single-sourced and require caution
  • The specific phrasing and internal details of Calgary’s risk assessment process, and the exact contractual terms the City obtained from Microsoft for Copilot, have not been published in full; they remain internal procurement and legal artifacts. Readers should therefore treat statements about Copilot’s approval as accurate as reported by the City, while understanding that actual safety depends on how Copilot was configured and what contractual assurances were negotiated.

Recommendations: how Calgary and other cities can strengthen AI governance​

  • Publish the AI Strategy and the risk assessment criteria. Transparency is the single best tool for public trust.
  • Conduct and publish a red-team-style assessment of approved AI tools. A public, anonymized summary of findings and mitigations will reassure residents and watchdogs.
  • Introduce a citizen-facing AI use register: list where AI is used, what data classes it processes, and what controls are in place.
  • Mandate no PII in consumer chatbots as an immediate behavioral rule, accompanied by technical enforcement where possible (network blocks, DNS, content filters).
  • Invest in technical auditing tools that continuously monitor for unexpected data flows to third-party AI endpoints.
  • Establish vendor monitoring for model changes and deprecation: AI vendors often retire or upgrade models; contractual clauses should require notification of changes that could affect data handling.
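The monitoring recommendation above can be illustrated with a minimal log-scanning sketch: flag outbound requests whose destination domain (or a parent domain) appears on a watchlist of consumer-AI endpoints. The log format, function name, and domain list are assumptions for illustration; a production setup would feed DNS or proxy telemetry into a SIEM with a curated, regularly updated list.

```python
# Hypothetical watchlist of consumer-AI endpoints (illustrative, not exhaustive).
AI_ENDPOINT_DOMAINS = {"chatgpt.com", "chat.openai.com", "api.openai.com"}

def flag_ai_traffic(log_lines):
    """Yield (timestamp, client_host, domain) for log lines whose destination
    domain, or any parent domain, is on the watchlist.
    Assumed log format: '<timestamp> <client-host> <destination-domain>'."""
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        ts, host, domain = parts
        labels = domain.split(".")
        # Check the domain itself and each parent domain against the watchlist.
        for i in range(len(labels) - 1):
            if ".".join(labels[i:]) in AI_ENDPOINT_DOMAINS:
                yield ts, host, domain
                break
```

Alerts from a monitor like this also surface the shadow-IT gap noted earlier: a hit from a workstation after the network block indicates a bypass worth investigating.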

Conclusion: a defensible balance — but the hard work is only beginning​

Calgary’s decision to block ChatGPT on municipal networks on February 6, 2026, is a defensible, low-regret move for a public organization that prioritizes privacy, security, and public trust. The ban is not a repudiation of AI; the City continues to deploy AI strategically where controls and contracts make it safe to do so, notably with Microsoft 365 Copilot and targeted machine-learning projects.
However, blocking a consumer chatbot is only the first step in a longer governance journey. The hard work — creating published policies, ensuring technical configurations match contractual promises, training staff, and closing shadow-IT gaps — is what will determine whether Calgary’s AI program truly balances innovation with accountability.
For other cities watching Calgary’s example, the takeaway is clear: treat consumer generative AI as a high-risk technology until it can be contractually and technically constrained; embrace enterprise-grade AI where it demonstrably meets municipal controls; and prioritize public transparency so residents can see how AI is helping — and how it is being kept in check.

Source: LiveWire Calgary, “City of Calgary blocks ChatGPT use on city devices and networks”
 
