The City of Montréal has quietly turned a classic municipal pain point—finding timely information on services, schedules and rules—into a 24/7 conversational surface. A virtual agent built with Microsoft Copilot Studio now answers citizen questions on the city’s public website and connects directly to key backend systems. (microsoft.com, montreal.ca)

Background​

Montréal is a dense, bilingual metropolis with complex municipal services that range from waste collection and library hours to tax payments and permits. Its scale (a municipal population commonly reported above 1.7 million, with a metropolitan area exceeding four million) generates large volumes of routine inquiries that strain call centers and website search functions. (en.wikipedia.org, www150.statcan.gc.ca)
Local governments worldwide are experimenting with conversational assistants to deliver fast answers, reduce simple call volumes and surface public information in more accessible ways. Microsoft’s Copilot Studio is one of the low‑code platforms positioned to make that work practical: it lets municipal IT teams convert website content and forms into knowledge-driven agents, connect to data sources, and combine curated chatbot flows with generative responses. (learn.microsoft.com, microsoft.com)

What Montréal built and why it matters​

A citizen-facing assistant that draws from the city’s knowledge base​

Montréal embedded a conversational assistant on its public website that can answer natural‑language questions about public services, administrative procedures, tax payments, public space maintenance and more. The agent indexes and reasons over the city website’s content—reportedly more than 40,000 pages—and furnishes answers that include links back to the official pages for verification. These capabilities are intended to reduce friction for residents who previously had to sift through menus or call 311 for routine queries. (microsoft.com)

Direct system integrations for action-oriented responses​

Crucially, the assistant is not just a search overlay. The city connected the agent to two internal systems to provide live, contextualized responses:
  • A waste‑collection system that lets the assistant provide a customized pickup schedule based on the user’s location.
  • A facilities system (for example, public libraries) that allows the assistant to return specific location and schedule information on request.
That direct connectivity replaces the prior user experience where a query would only return a link to a web app and force a second search. The agent now serves the answer inline, immediately. (microsoft.com)
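To make the pattern concrete, the sketch below shows the general shape of such an action-oriented integration in Python: take a user-supplied location, call an existing municipal API, and return the schedule inline with a link back to the official page. The endpoint URL, query parameter and response shape are hypothetical; Montréal’s actual waste-collection API and the Copilot Studio action that calls it are not described in the customer story.
```python
import requests

# Hypothetical endpoint; the city's real waste-collection API is not public.
COLLECTIONS_API = "https://api.example.montreal.ca/waste/collections"


def waste_schedule_answer(postal_code: str) -> str:
    """Return an inline, human-readable pickup schedule for a postal code."""
    resp = requests.get(COLLECTIONS_API, params={"postalCode": postal_code}, timeout=10)
    resp.raise_for_status()
    data = resp.json()  # assumed shape: {"sector": str, "pickups": [{"type": str, "day": str}, ...]}

    lines = [f"Collection schedule for sector {data['sector']}:"]
    lines += [f"- {p['type']}: {p['day']}" for p in data["pickups"]]
    lines.append("More details: the waste-collection pages on montreal.ca")  # link back to the official site
    return "\n".join(lines)
```
In a Copilot Studio deployment the equivalent step would typically be an agent action, Power Automate flow or connector rather than hand-written code; the point is that the answer is assembled and served inline instead of as a link.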

Built by the city’s own team using Copilot Studio​

Montréal’s IT department designed and deployed the assistant internally using Microsoft Copilot Studio. The project was led by city staff without outside consultants, and the team leaned on existing APIs rather than building new connectors, which the city says saved time and resources during development. Mohammed Arhab, Solution Architect with Montréal’s IT department, notes the hybrid approach—mixing pre‑built chatbot responses and AI‑generated outputs—helped the city increase accuracy beyond what generative models would deliver alone. (microsoft.com)

Architecture and operational plumbing​

Platform components and telemetry​

The published architecture diagram shows the agent as a Copilot Studio agent grounded in site content and augmented with backend integrations and analytics. The team initially used a customized Power BI dashboard for telemetry and plans to migrate to Copilot Studio’s built‑in analytics capabilities as they mature. Security and governance reviews were performed by the city’s cybersecurity group before production deployment. (microsoft.com)
Key platform elements that underlie the deployment:
  • Copilot Studio for authoring, testing, and publishing the agent.
  • Knowledge grounding from the public Montréal website (the primary content corpus).
  • API connectors to municipal systems (waste collection, facilities).
  • Telemetry and dashboards (initially Power BI, moving to Copilot Studio analytics).
  • Governance controls and an internal cybersecurity sign‑off. (microsoft.com, learn.microsoft.com)
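As an illustration of the telemetry side, the snippet below sketches one per-conversation record a dashboard could aggregate. The field names and values are assumptions made for the example; Copilot Studio ships its own transcripts and analytics, and Montréal’s actual Power BI schema is not public.
```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class ConversationEvent:
    """One row per conversation, suitable for feeding a reporting dashboard."""
    conversation_id: str
    timestamp: str
    channel: str            # e.g. "web"
    language: str           # "fr" or "en"
    resolution: str         # "generative", "topic", "api", or "escalated"
    csat_score: int | None  # post-chat survey result, if the user answered it


event = ConversationEvent(
    conversation_id="c-00042",
    timestamp=datetime.now(timezone.utc).isoformat(),
    channel="web",
    language="fr",
    resolution="generative",
    csat_score=5,
)
print(json.dumps(asdict(event), indent=2))  # rows like this would populate the dashboard
```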

Hybrid response model​

Rather than rely exclusively on freeform generative output, Montréal’s team implemented a hybrid model that blends:
  • Curated, deterministic responses for frequently asked or sensitive topics.
  • Generative answers grounded in website content for broad, open questions.
  • API‑driven responses when up‑to‑date, user‑specific data is required (e.g., waste schedules).
This hybrid pattern is a core capability of Copilot Studio and is explicitly cited by Montréal as a critical factor in raising answer accuracy. (microsoft.com, learn.microsoft.com)
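The routing logic behind such a hybrid model can be sketched in a few lines. The example below is an assumption about the general pattern, not Montréal’s actual Copilot Studio configuration: curated topics win first, live-data questions are handed to an API integration, and everything else falls back to a generative answer grounded in the website corpus.
```python
# Deterministic answers for sensitive or high-frequency topics (illustrative content only).
CURATED_TOPICS = {
    "tax deadline": "See the official tax pages on montreal.ca for instalment due dates.",
    "emergency": "For emergencies call 911; for non-urgent city issues call 311.",
}


def route(question: str) -> str:
    q = question.lower()

    # 1. Curated, deterministic responses take priority.
    for trigger, answer in CURATED_TOPICS.items():
        if trigger in q:
            return answer

    # 2. API-driven responses when current, user-specific data is needed.
    if "collection" in q or "pickup" in q:
        return "api:waste_schedule"  # hand off to the waste-collection integration

    # 3. Grounded generative answer for broad, open questions.
    return "generative:grounded_on_montreal.ca"


print(route("When is the next recycling pickup?"))  # -> api:waste_schedule
```
In Copilot Studio the same priorities are expressed through topic triggers and orchestration settings rather than hand-coded keyword checks; the sketch only shows the ordering.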

Early results and performance claims​

In operational terms, the city reported encouraging early metrics:
  • The agent handled a substantial share of routine queries within its first four months and received high customer satisfaction scores early in that period. (microsoft.com)
  • More than 85% of conversations are currently handled with generative responses that are grounded in public website content; the remainder are routed to deterministic topics or API calls. (microsoft.com)
  • Montréal reports a 95% accuracy rate in internal testing after customizing the model with custom entities (for example, Canadian postal codes and the list of Montréal’s boroughs). The team expects accuracy to improve further as usage increases and the agent is refined. (microsoft.com)
Caveat: the 95% figure is reported by the city based on internal testing and has not been independently verified. While encouraging, municipal deployments often exhibit different accuracy characteristics in live, diverse user populations than in controlled tests—especially over time and across edge cases—so continued monitoring is essential.
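To illustrate what the custom entities mentioned above might look like, the sketch below pairs a simplified regex entity for Canadian postal codes with a small closed-list entity for boroughs. Both are toy examples; the city’s actual Copilot Studio entity definitions are not published.
```python
import re

# Simplified Canadian postal code pattern (e.g. "H2Y 1C6"); the official format excludes
# certain letters, which a production entity would enforce.
POSTAL_CODE = re.compile(r"\b[A-Za-z]\d[A-Za-z][ -]?\d[A-Za-z]\d\b")

# A few of Montréal's 19 boroughs; a real closed-list entity would cover all of them
# plus common synonyms and accent-free spellings.
BOROUGHS = {"Ville-Marie", "Le Plateau-Mont-Royal", "Rosemont–La Petite-Patrie", "Verdun"}


def extract_entities(utterance: str) -> dict:
    """Pull postal codes and borough names out of a citizen's question."""
    found_codes = POSTAL_CODE.findall(utterance)
    found_boroughs = [b for b in BOROUGHS if b.lower() in utterance.lower()]
    return {"postal_codes": found_codes, "boroughs": found_boroughs}


print(extract_entities("What day is pickup in Verdun near H4G 2L3?"))
# -> {'postal_codes': ['H4G 2L3'], 'boroughs': ['Verdun']}
```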

Why Copilot Studio was chosen​

Montréal’s team describes three principal reasons for selecting Copilot Studio:
  • Rapid development in a low‑code environment allowed city developers to build the assistant without external vendors.
  • Out‑of‑the‑box support for grounding generative answers on web content and for combining curated chatbot flows with generative answers—the hybrid model—improved accuracy and predictability.
  • Easy connectivity to existing APIs and Power Platform tooling meant the team did not need to implement new custom APIs to access the city’s systems. (microsoft.com)
From a platform standpoint, Copilot Studio includes templates (such as the “Citizen Services” template) and connector patterns designed specifically for public‑sector scenarios, which helps shorten the path from prototype to live agent. Those templates document both capabilities and limitations, and explicitly warn implementers that AI‑generated content can still contain mistakes and must be governed appropriately. (learn.microsoft.com, microsoft.com)

Strengths and practical benefits​

  • Speed to value: Low‑code authoring and prebuilt templates let municipal teams prototype and publish agents much faster than building chat systems from scratch. Montréal’s internal development without external consultants underlines that point. (microsoft.com)
  • Hybrid accuracy: Combining deterministic responses with grounded generative answers reduces hallucination risk on critical topics and improves repeatability for common queries. (microsoft.com)
  • Actionable answers: Direct API integrations deliver actionable, user‑specific outputs—like customized waste schedules—instead of links, improving the citizen experience. (microsoft.com)
  • Multichannel potential: Agents authored in Copilot Studio can later be extended to other channels (Microsoft Teams, internal tools, and more), increasing reuse and lowering long‑term maintenance overhead. (learn.microsoft.com)
  • Governance and telemetry options: Built‑in (or adjacent) analytics, combined with Power Platform governance controls, means cities can add oversight, audit trails and quotas. Montréal’s move toward Copilot Studio analytics and Power Platform pipelines is a practical example. (microsoft.com, learn.microsoft.com)

Risks, caveats and operational considerations​

Municipal AI deployments are promising, but they carry real operational, security and ethical risks. Montréal’s case and Microsoft’s supporting documentation highlight several areas other cities should not ignore.

Data exposure and permission boundaries​

Agents that connect to Dataverse tables or other data stores can inadvertently expose fields unless access is tightly scoped. Microsoft documentation warns that default table permissions may expose more fields than intended; makers must explicitly restrict field‑level access if sensitive columns exist. Municipalities should audit entity and table permissions to ensure least‑privilege access. (learn.microsoft.com)
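A lightweight way to reason about this, independent of any specific platform API, is an explicit allow-list applied to integration responses before they reach the agent. The record type, field names and sample data below are hypothetical.
```python
# Hypothetical allow-list per record type; only these fields may reach the agent.
ALLOWED_FIELDS = {
    "facility": {"name", "address", "opening_hours"},
}


def scope_record(record_type: str, record: dict) -> dict:
    """Strip any field that is not explicitly allow-listed for this record type."""
    allowed = ALLOWED_FIELDS.get(record_type, set())
    leaked = set(record) - allowed
    if leaked:
        # Log and drop anything outside the allow-list instead of passing it through.
        print(f"warning: stripping non-allow-listed fields: {sorted(leaked)}")
    return {key: value for key, value in record.items() if key in allowed}


raw = {"name": "Example Library", "address": "123 Example St",
       "opening_hours": "10:00-18:00", "internal_contact_email": "staff@example.org"}
print(scope_record("facility", raw))
# -> {'name': 'Example Library', 'address': '123 Example St', 'opening_hours': '10:00-18:00'}
```
In Dataverse specifically, the equivalent control is reviewing table and column-level permissions rather than filtering in code, but the least-privilege principle is the same.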

Authentication and misconfiguration pitfalls​

Power Pages and token passthrough are convenience features, but incorrect authentication settings—especially on private sites—can create misleading agent behavior (for example, reporting record creation when the underlying operation failed). Microsoft advises configuring Microsoft Entra and validating end‑to‑end authentication flows during staged testing. Montréal’s cybersecurity team reviewed the stack prior to production, which is a recommended pattern. (microsoft.com)

Hallucination and content freshness​

Even when grounded in website content, generative models can err—paraphrasing or omitting key details. The Citizen Services template and Copilot Studio guidance both explicitly note that AI‑generated content can contain mistakes and that teams must maintain review processes. For critical or time‑sensitive answers (tax penalties, emergency notices), provide deterministic responses or link to authoritative documents and include explicit validation steps. (learn.microsoft.com, microsoft.com)
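One way to operationalize that guidance is a post-generation review step. The sketch below is an assumption about how such a check could look, not a Copilot Studio feature: reject generative answers for designated critical topics and require that any cited link resolve to the official domain.
```python
from urllib.parse import urlparse
import re

OFFICIAL_DOMAIN = "montreal.ca"
CRITICAL_TOPICS = ("tax penalty", "boil-water", "evacuation")  # examples, not an official list


def passes_review(question: str, answer: str) -> bool:
    """Gate a generative answer before it is shown to the citizen."""
    # Critical or time-sensitive topics should be served deterministically, never generatively.
    if any(topic in question.lower() for topic in CRITICAL_TOPICS):
        return False

    # Every cited link must point back to the official site.
    links = re.findall(r"https?://\S+", answer)
    return all(urlparse(link).netloc.endswith(OFFICIAL_DOMAIN) for link in links)


ok = passes_review("When is the library open?",
                   "The library opens at 10:00. See https://montreal.ca for details.")
print(ok)  # True
```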

Privacy, uploaded documents and document extraction​

If file upload or OCR features are enabled, document extraction is convenient but error‑prone for low‑quality images, and uploaded documents may contain personally identifying information. Municipalities must implement data retention, consent and DLP policies and include clear privacy notices on the site. These are special considerations when agents accept attachments or PDFs.
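A minimal sketch of that kind of policy, assuming simple pattern-based redaction before any extracted text is stored (a real deployment would pair this with platform DLP controls, consent notices and retention limits):
```python
import re

# Common PII patterns; deliberately simplified for illustration.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "postal_code": re.compile(r"\b[A-Za-z]\d[A-Za-z][ -]?\d[A-Za-z]\d\b"),
}


def redact(text: str) -> str:
    """Mask recognizable PII in extracted document text before it is logged or retained."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text


print(redact("Reach me at 514 555 0199 or jane@example.org, I live near H2X 1Y4."))
```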

Cost, quotas and scaling​

Agents consume messages and model compute, and Copilot Studio deployments are subject to message quotas and metered costs. Administrators should forecast consumption, monitor telemetry, and build throttling or fallback strategies to avoid unexpected billing surprises during spikes (e.g., municipal emergencies, seasonal requests). Microsoft documentation and community reporting emphasize the need to plan for quotas and billing. (cxtoday.com)
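The forecasting-and-fallback idea can be sketched as a simple usage guard. The daily budget below is a hypothetical planning figure, not a product quota; the guard only illustrates the alert-then-degrade pattern that administrators would implement with the platform’s own billing and admin controls.
```python
DAILY_MESSAGE_BUDGET = 20_000   # hypothetical planning figure, not a Copilot Studio quota
ALERT_THRESHOLD = 0.8           # notify operators at 80% of the planned budget


class UsageGuard:
    """Track daily message consumption and degrade gracefully near the budget."""

    def __init__(self) -> None:
        self.count = 0

    def record_message(self) -> str:
        self.count += 1
        if self.count >= DAILY_MESSAGE_BUDGET:
            return "fallback"   # e.g. canned answers plus a 311 referral until the next day
        if self.count >= ALERT_THRESHOLD * DAILY_MESSAGE_BUDGET:
            return "alert"      # operators review the surge before the budget is exhausted
        return "ok"


guard = UsageGuard()
statuses = [guard.record_message() for _ in range(int(0.81 * DAILY_MESSAGE_BUDGET))]
print(statuses[-1])  # "alert" once usage crosses 80% of the planned budget
```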

Practical checklist and lesson plan for municipal IT teams​

  • Define scope and success metrics: choose initial domains that are high‑volume but low‑risk (e.g., waste schedules, hours of operation), and set targets for CSAT, containment rate, call‑volume reduction and false‑positive thresholds (a toy metrics calculation follows this list).
  • Source and curate knowledge: identify the website sections that will ground the model, create authoritative canonical pages, and add curated Q&A pairs for high‑risk topics.
  • Build hybrid flows: use deterministic topics for legal, billing and emergency information, and grounded generative flows for broad informational queries.
  • Implement identity and least privilege: validate authentication end‑to‑end (token passthrough vs. token‑based access) and review table and field permissions in Dataverse or other systems.
  • Test and validate: run a controlled pilot with representative citizen queries and measure accuracy, hallucinations, and edge‑case failures.
  • Set governance and telemetry: configure analytics and dashboards (Power BI or Copilot Studio analytics) and define approval workflows for content updates, model prompts and rollout.
  • Plan for scale and cost control: set message quotas, alerts, and fallback messages for when limits are reached, and model seasonal surges with throttling or reduced‑scope modes prepared.
  • Communicate to citizens: clearly label the assistant as automated, provide fallbacks and escalation paths to human agents (e.g., 311) for complex queries, and post privacy notices describing what data is collected and how long it’s retained.
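For the success metrics named at the top of the checklist, the toy calculation below uses assumed definitions (containment rate as the share of conversations resolved without human escalation; CSAT as the average post-chat survey score) over records shaped like the hypothetical telemetry event shown earlier.
```python
# Sample telemetry rows; field names mirror the hypothetical ConversationEvent above.
conversations = [
    {"resolution": "generative", "csat_score": 5},
    {"resolution": "api",        "csat_score": 4},
    {"resolution": "escalated",  "csat_score": 2},
    {"resolution": "topic",      "csat_score": None},
]


def containment_rate(convs: list[dict]) -> float:
    """Share of conversations resolved without escalation to a human agent."""
    contained = [c for c in convs if c["resolution"] != "escalated"]
    return len(contained) / len(convs)


def average_csat(convs: list[dict]) -> float:
    """Mean post-chat survey score among users who answered the survey."""
    scores = [c["csat_score"] for c in convs if c["csat_score"] is not None]
    return sum(scores) / len(scores)


print(f"containment: {containment_rate(conversations):.0%}")  # 75%
print(f"CSAT: {average_csat(conversations):.1f}")             # 3.7
```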
These steps reflect lessons Montréal operationalized—particularly the emphasis on hybrid responses, custom entities for better recognition, and cybersecurity sign‑offs. (microsoft.com, learn.microsoft.com)

What Montréal’s experience suggests about municipal AI adoption​

Montréal’s rollout demonstrates how modern low‑code platforms reduce friction for public‑sector teams seeking to modernize citizen services. The combination of:
  • content grounding (indexing tens of thousands of pages),
  • API integrations for real‑time, location‑specific answers,
  • hybrid deterministic + generative flows,
  • and internal development using Copilot Studio
creates a pragmatic template for other cities that want to move beyond static FAQs and single‑page search.
That said, Montréal’s published metrics—while promising—should be interpreted with practical caution. Figures such as “85% of conversations are handled generatively” and a “95% accuracy rate” are meaningful operational signals, but they come from the city’s internal testing and early production period; long‑term performance will depend on user mix, seasonal surges, model updates and ongoing governance. Continued monitoring, prompt engineering, and real‑world validation are essential to sustain the early gains. (microsoft.com)

Final analysis: promise balanced with prudence​

Montréal’s deployment is a practical, well‑scoped example of how a major North American city can use Copilot Studio to deliver tangible improvements in citizen experience. The project’s notable strengths are:
  • Pragmatic engineering: leveraging existing APIs and Power Platform tooling to avoid expensive custom integrations.
  • Hybrid design: mixing curated chatbot flows with grounded generative outputs to reduce error rates.
  • Organizational ownership: building the assistant internally enabled faster iteration and retained institutional knowledge. (microsoft.com)
At the same time, the deployment underlines the familiar caveats of generative AI in public services:
  • Governance must lead: privacy, permissions, and authentication require explicit plans and audits before scale.
  • Measurement matters: internal accuracy figures need independent, ongoing validation in production.
  • Cost and quotas are real: message consumption models require forecasting and failover strategies.
Montréal’s plan to expand the assistant—adding more services, pushing analytics into Copilot Studio and building an internal assistant for community communications agents—reflects a sensible incremental approach: start with high‑volume, low‑risk services; learn operational patterns; then broaden scope while strengthening governance and monitoring. For other cities evaluating similar projects, Montréal’s experience is a helpful blueprint: low‑code tooling and hybrid agent designs can unlock fast wins, but municipalities must pair innovation with disciplined security, privacy and telemetry practices to convert early promise into durable public value. (microsoft.com, learn.microsoft.com)

Montréal’s virtual agent is not a finished product so much as a live experiment in modern municipal service delivery—one that already delivers better answers, faster access, and measurable operational relief to human agents. Its early success offers a roadmap for other municipal IT teams, but it also illustrates the operational responsibilities that come with putting generative AI at the front door of public services. (microsoft.com, montreal.ca)

Source: Microsoft City of Montréal supports citizens with a virtual agent built using Microsoft Copilot Studio | Microsoft Customer Stories
 
