Denver Elevates CIO to CAIO to Accelerate Responsible City AI

Denver’s tech leadership has quietly been recast: the city’s chief information officer, Suma Nallapati, now carries the formal title Chief Artificial Intelligence and Information Officer (CAIO) — a change the administration says is meant to accelerate responsible AI adoption across city services while keeping human-centered governance front and center.

Background / Overview

The title change was announced as Denver hosted the city’s second DenAI Summit, a two‑day convening designed to bring academics, vendors, civic leaders and city staff together to translate AI experiments into operational programs. The move formalizes a trend seen in other U.S. cities: folding AI strategy and governance into the top technology office to shorten decision cycles and centralize accountability. Nallapati — who joined Denver as CIO in 2023 after roles at the state level and in the private sector — will now lead an explicit, city‑level effort to scale AI projects, negotiate vendor partnerships, and coordinate cross‑agency governance. The city says the expanded role took effect on Sept. 29 and will help align procurement, privacy and operational rollout across departments.

Why this matters: bundling AI responsibility with the CIO position shortens the feedback loop between policy and implementation. It signals to vendors and departments that AI initiatives will be centrally vetted for ethics, equity, security, and service impact before scaling.

What Denver has already built: early deployments and measurable outcomes

Sunny: a frontline AI assistant

Denver’s highest‑visibility AI project to date is Sunny, a multilingual chatbot used as a 311 channel to route questions, surface services, and provide basic transactional assistance 24/7. City reports show Sunny handled more than 102,000 resident engagements between Jan. 1 and Sept. 8, with support for dozens of languages and reported customer satisfaction scores in the high‑80s to 90s. The bot is also intended to reduce live‑agent load by absorbing routine interactions. The Sunny rollout dates back to earlier work that used an AWS‑based chatbot platform, and local reporting has noted the city invested six figures to implement the initial capability. The city has also acknowledged the risk of inaccuracies typical of automated assistants and frames Sunny as a first‑contact augmentation rather than a legally authoritative source.

A prequalified vendor bench and tactical procurement

In spring 2025 Denver issued an RFP to create a prequalified bench of AI vendors: a curated pool designed to reduce procurement friction and provide departments with vetted partners who meet security, scalability and equity criteria. The city’s intent is to shorten time‑to‑pilot while preserving control over vendor risk, data ownership and audit rights. Proposals for that effort were solicited in April and have been evaluated against technical capability, innovation potential, compliance posture and cost.

Why a title matters: operational and governance implications

Centralized leadership — benefits

  • Faster coordination across agencies. A single executive responsible for both information technology and AI can align data strategies, integrations, and operational pilots without months of interdepartmental negotiation.
  • Unified governance. Centralizing AI policy helps ensure consistent standards for auditing, explainability, and data handling — critical where public‑sector decisions intersect with resident outcomes.
  • Procurement efficiency. A CAIO who signs off on vendor benches and RFP frameworks reduces repeated procurement cycles and encourages vendor accountability through standardized contracts and risk assessments.
These are not theoretical. Denver’s public materials and contemporaneous reporting describe exactly this intent: to move from isolated pilots to repeatable, vendor‑backed production patterns, while retaining city control of data and oversight.

Centralized leadership — risks and tradeoffs

  • Concentration of authority. Centralization can create single points of failure if the office lacks sufficient staff or independent oversight; the political durability of a CAIO depends on continued council and mayoral support.
  • Vendor lock‑in pressure. Prequalified benches speed delivery, but without strict egress, portability and provenance clauses they can entrench dependencies.
  • Cross‑cutting capacity demands. Embedding AI into permitting, public safety, and resident services requires sustained engineering, data science, legal and procurement resources that many municipalities under budget pressure struggle to retain. Colorado political reporting indicates Denver has faced large budget shortfalls, a context that amplifies both urgency and risk.
Where possible, Denver’s messaging stresses human‑centered and ethical AI — but implementation details and sustained capacity will determine whether the promise becomes durable modernization or brittle automation.

Technical posture and vendor ecosystem: what Denver is adopting and evaluating

Enterprise tools and vendor partnerships

City officials publicly named partnerships and pilot targets that include Microsoft Copilot and exploratory work with Salesforce’s agentic AI capabilities. The approach is deliberately hybrid: leverage commercial AI surfaces where they add productivity, while also building custom agents for city‑specific needs (for example, licensing and permitting workflows).

What trustworthy deployment requires (and what Denver has signaled)

Across announcements, Denver stresses several operational guardrails:
  • Data ownership. The city says it will retain ownership of municipal data and audit vendor algorithms rather than cede control.
  • Security and privacy checks. Vendor proposals and internal pilots are to be evaluated by city security and privacy teams before rollout.
  • Metrics and resident impact. The city is tracking customer satisfaction, interaction volumes absorbed by automated channels, and time‑saved measures as part of go/no‑go criteria.
These are sensible posture statements, but the hard work is in the operationalization: cataloging datasets, enforcing tenant‑scoped access, and building observability into model inputs and outputs.

What Denver’s CAIO will need to operationalize — a practical checklist

Denver’s public statements sketch strategic goals; operational reality demands concrete implementation. For municipal IT and Windows‑centric enterprise teams evaluating similar programs, these actions form a practical checklist:
  • Catalog and classify datasets by sensitivity, legal exposure, and public‑records risk.
  • Require explicit contract clauses on model provenance, training‑data reuse, and data egress.
  • Enforce identity‑first access controls (phishing‑resistant MFA, role‑based access, tenant scoping).
  • Instrument model and agent telemetry: inputs, outputs, drift metrics, and human‑in‑the‑loop overrides.
  • Adopt a staged rollout: sandbox → pilot cohort → holdout evaluation → controlled scale.
  • Maintain public transparency: publish non‑proprietary summaries of audits, performance, and equity testing.
Each of these steps is consistent with authoritative AI governance guidance (for example, the NIST AI Risk Management Framework) and with vendor‑provided security tooling such as Microsoft’s Copilot security and governance controls. Denver’s statements indicate it intends to follow similar patterns, but the degree of enforcement will determine outcomes.
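The telemetry item on that checklist is the one most often left abstract, so here is a minimal sketch of what instrumenting model interactions with human‑in‑the‑loop overrides could look like. Everything here is illustrative: the field names, the `log_interaction` helper, and the idea of storing sizes rather than raw text are assumptions, not a Denver or vendor schema; a production system would ship these records to a SIEM or analytics pipeline.

```python
import json
import time
import uuid


def log_interaction(log, model_name, prompt, response, *, human_override=None):
    """Append one structured telemetry record for a model interaction.

    `log` is any list-like sink; in production this would feed a SIEM or
    analytics pipeline. All field names are illustrative, not a standard.
    """
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model": model_name,
        "prompt_chars": len(prompt),       # store sizes, not raw text,
        "response_chars": len(response),   # to limit public-records exposure
        "human_override": human_override,  # None, or the corrected output
    }
    log.append(record)
    return record


# Usage: record an exchange, then a later human-in-the-loop correction.
telemetry = []
log_interaction(telemetry, "permit-bot-v1",
                "Do I need a fence permit?", "Yes, for fences over 6 ft.")
log_interaction(telemetry, "permit-bot-v1",
                "Hours for 311?", "Open 24/7.",
                human_override="Phone line 7am-11pm; chatbot 24/7.")

# The override rate is one simple go/no-go signal for a staged rollout.
override_rate = sum(1 for r in telemetry if r["human_override"]) / len(telemetry)
print(json.dumps({"interactions": len(telemetry), "override_rate": override_rate}))
```

Tracking an override rate like this gives the holdout‑evaluation stage a concrete metric: a rising rate is an early drift or quality signal before wider scale.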

Security, compliance and vendor controls — how to keep municipal AI safe

Built‑in vendor protections are necessary but not sufficient

Vendors like Microsoft publish detailed guidance and built‑in mitigations for AI features — from prompt‑injection defenses and tenant‑scoped access to Purview‑backed sensitivity labeling and double‑key encryption options. These enterprise controls help reduce blast radius and keep model access aligned with existing permissions. But municipalities must still implement complementary administrative policies and operational monitoring.

Monitoring and continuous governance

Microsoft’s Copilot and Copilot Studio documentation emphasize a monitor‑and‑optimize lifecycle: auditing prompts and responses, tracking blocked queries, using SIEM and analytics for anomalous activity, and conducting periodic governance reviews. Denver’s approach to vendor vetting and internal audit suggests similar monitoring aims — a positive signal, but one that requires ongoing investment in staff and telemetry.

Equity, transparency and public trust — the political ledger

Denver’s rhetoric places equity and resident‑centered outcomes at the heart of the CAIO role. The DenAI Summit and related events emphasize public engagement, AI literacy, and multidisciplinary review of use cases. Those are important trust‑building practices. But public trust is fragile. Chatbots and agentic systems can amplify bias, produce incorrect or legally risky advice, or create opaque decision flows. Denver has signaled that Sunny will not be the final arbiter of legal or high‑stakes decisions; Axios reporting of its original Sunny launch highlights both the promise and the city’s public disclaimers about accuracy — a useful cautionary precedent. Municipal leaders must publish redress pathways, external audits, and measurable equity testing to maintain legitimacy.

Fiscal realities: scaling AI under budget constraints

Colorado reporting about the mayor’s budget round indicates Denver has been navigating sizable fiscal pressures while pursuing ambitious technology goals. Embedding AI into the CIO’s remit can be a cost‑saving story if pilots demonstrably reduce time‑to‑service or automate high‑frequency, low‑complexity tasks — but the fiscal calculus depends on licensing, hosting, staffing and long‑term maintenance costs. Key budget considerations for Denver and peer cities:
  • Licensing and per‑user pricing for productivity copilots can create substantial recurring costs at scale.
  • Vendor‑hosted model usage (inference and grounding) often has metered costs that must be modeled against estimated time‑savings.
  • Staffing an internal governance and AI operations team — auditors, data engineers, model ops, privacy counsel — is not optional if the city wants to maintain control and auditability.
  • Early, well‑scoped pilots and a prequalified vendor bench can reduce procurement friction, but not necessarily total cost of ownership unless portability and vendor competition are enforced.
These tradeoffs underline the importance of transparent TCO modeling before broad rollouts.
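The budget considerations above can be made concrete with a back‑of‑envelope TCO comparison. The function and every number below are assumptions for illustration only, to be replaced with a city's real seat counts, negotiated pricing, and measured time savings; this is a planning sketch, not a pricing model.

```python
def copilot_tco(users, license_per_user_month, inference_cost_month,
                ops_staff_cost_year, hours_saved_per_user_month,
                loaded_hourly_rate):
    """Rough annual total cost of ownership vs. estimated labor savings."""
    annual_cost = (users * license_per_user_month * 12   # per-seat licensing
                   + inference_cost_month * 12           # metered model usage
                   + ops_staff_cost_year)                # governance/AI-ops team
    annual_savings = (users * hours_saved_per_user_month
                      * 12 * loaded_hourly_rate)
    return {"annual_cost": annual_cost,
            "annual_savings": annual_savings,
            "net": annual_savings - annual_cost}


# Hypothetical mid-size deployment: 2,000 seats at $30/user/month,
# $10k/month metered inference, a governance team at $600k/year,
# and an estimated 2 hours saved per user per month at a $55 loaded rate.
result = copilot_tco(2000, 30.0, 10_000, 600_000, 2.0, 55.0)
print(result)  # net is positive only if the time-savings estimate holds
```

Note how sensitive the result is to the hours‑saved estimate: halving it flips many scenarios negative, which is exactly why holdout groups and measured (not assumed) time savings belong in the go/no‑go criteria.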

Independent checks and areas that need public clarity

Several city claims and program details are verifiable in public reporting, but a few items require fuller public documentation to avoid ambiguity:
  • Exact staffing plan and budget for the expanded CAIO office: public reporting confirms the title change and strategic intent, but long‑term funding lines and staffing levels were not fully detailed in early announcements. This matters for operational capacity and resilience.
  • RFP evaluation criteria and red‑team/audit commitments: the April RFP established a vendor bench process, but ongoing public reporting should disclose the evaluation framework and audit test plans used to approve vendors for production use.
  • Public‑records and FOIA handling for AI‑generated content: municipalities must define how AI outputs are treated under public records regimes; Denver has made high‑level commitments to openness, but practical policies deserve public publication and citizen guidance.
Where claims are unclear or only partially documented in press reporting, they should be flagged for auditors and oversight committees — the city’s own transparency commitments create an expectation that these details will be accessible.

What other cities can learn from Denver’s approach

Denver’s strategy — combine a high‑profile summit to build a local ecosystem, create a prequalified vendor bench, and elevate AI oversight to the CIO level — offers a replicable playbook for other governments. The most important lessons are practical:
  • Invest in public engagement early to build literacy and solicit use‑case ideas that reflect resident priorities.
  • Start with bounded pilots that have measurable KPIs and holdout groups for impact attribution.
  • Require vendor contracts to include audit, portability and privacy guarantees before production scale.
  • Pair technical pilots with workforce planning so employees are reskilled and role changes are managed transparently.
These are not novel prescriptions, but they are essential if AI is to be a force for service improvement rather than a source of public friction.

Conclusion

Denver’s decision to expand the CIO role to Chief Artificial Intelligence and Information Officer is a pragmatic move that recognizes AI is no longer an exploratory add‑on — it’s a capability that needs governance, procurement discipline and operational heft. The city has the right initial pieces in place: a public summit to drive ecosystem engagement, an RFP to prequalify vendors, and frontline pilots such as Sunny that demonstrate measurable resident impact.

At the same time, success will hinge on the nuts‑and‑bolts: sustained staffing, transparent audit and accountability structures, robust contract language to avoid vendor lock‑in, and rigorous monitoring to detect bias, drift and misuse. Denver’s rhetoric of human‑centered AI is promising, but the proof will be in the operational details — the dashboards, the audit reports, and the city’s willingness to publish the tough tradeoffs it faces while balancing fiscal pressure and public expectations.

For municipal technologists and Windows‑oriented IT teams watching closely, Denver’s experiment offers both a template and a caution: centralize authority to drive coherence, but invest equally in transparency, governance tooling, and continuous monitoring — the essential infrastructure that separates pilot‑stage novelty from durable public value.
Source: GovTech How a New Title for Denver's CIO Helps Power AI Work
Today’s Copilot Fall Release is Microsoft’s clearest statement yet that the company intends to make AI an ambient, human-centered companion across Windows, Edge, and mobile — not just another flashy feature. The update bundles an expressive avatar called Mico, collaborative Copilot Groups for up to 32 people, deeper long‑term memory and connectors to your files and calendars, a health‑grounded Copilot for safer medical answers, and tighter agentic integrations in Edge and Windows that let the assistant summarize tabs, complete multi‑step tasks, and surface resumable browsing “Journeys.” This is a functional shift: Copilot is moving from a one‑off query tool toward a persistent, multimodal assistant designed to save time, preserve context, and stay aligned with people — but it also amplifies tradeoffs around privacy, governance, and reliability.

Background

Microsoft’s Copilot program has evolved quickly from sidebar answers to a family of experiences spanning Microsoft 365, Windows, Edge, and mobile apps. The Fall Release is the consolidation point for several experimental builds — voice wake words, vision features, agentic Actions, and previewed avatars — into a more coherent product push aimed at making “every PC an AI PC.” The company frames this as human‑centered AI: assistive, opt‑in, privacy‑sensitive, and explicitly designed to enhance human judgment rather than replace it. Early preview reporting and Microsoft’s announcement materials make clear the rollout is staged, U.S.‑first, and subject to regional or SKU restrictions.

What’s in the Fall Release: Feature map and quick take

  • Mico — an optional animated avatar that provides visual feedback in voice interactions and study flows.
  • Copilot Groups — shared Copilot chats designed for collaboration, supporting up to 32 participants and link‑based invites to join a single shared workspace.
  • Memory & Personalization — long‑term memory with an intent‑driven model, in‑app memory controls, and the ability to recall personal details across sessions.
  • Connectors — optional links to OneDrive, Outlook, Gmail, Google Drive, Google Calendar and more so Copilot can reason over your files and events (explicit consent required).
  • Copilot for Health — responses grounded in vetted health publishers (Microsoft cites partners such as Harvard Health) and a Find‑Care flow to surface clinicians matched by specialty, language, and location.
  • Edge & Journeys — tab reasoning, Actions that can perform multi‑step browser tasks (booking, form‑filling), and Journeys that turn past browsing into resumable storylines.
  • Windows integration — “Hey Copilot” wake word, Copilot Home with quick access to recent files and conversations, Copilot Vision for guided, contextual help, and text/voice interaction options across the OS.
Each of these elements is designed to make Copilot more social, persistent, and action‑capable — a meaningful product shift that changes both the way people interact with PCs and the kinds of data the assistant may access.

Mico: design, intent, and the Clippy shadow

A persona engineered for voice and study

Mico is an intentionally non‑human, stylized avatar that appears when users interact with Copilot by voice or enter Study and Learn experiences. It reacts to tone and context with color and motion, gives non‑verbal cues while the model listens or thinks, and is configurable or opt‑outable for users who prefer a minimal UI. Microsoft positions Mico as a social cue for talking to your computer — a visual anchor that reduces dialog awkwardness and makes voice interactions feel more natural.

The Clippy question: nostalgia as UX lever, but optional

Designers are explicitly aware of Clippy’s long history as an intrusive assistant. Mico takes the opposite approach: purpose‑first personality and clear opt‑in controls. That said, preview reporting notes playful easter eggs in test builds that nod to Clippy — a reminder that nostalgia can be a double‑edged sword when you’re trying to be both lovable and unobtrusive. Treat those preview interactions as observed behavior rather than guaranteed product features until Microsoft documents them.

Copilot Groups and Imagine: AI as a social tool

Shared sessions and collaborative creativity

Copilot Groups extends the assistant into shared, persistent threads where up to 32 people can join a single conversation via an invite link. Within a group, Copilot can summarize threads, tally votes, propose options, and split tasks — effectively becoming a real‑time facilitator for planning, study sessions, or collaborative brainstorming. Groups lowers friction for ad‑hoc collaboration by allowing anonymous or link‑based joining and makes the assistant a third, impartial participant in the conversation.

Imagine: remixable AI creativity

Paired with the collaborative space is an Imagine canvas where AI‑generated ideas can be browsed, liked, and remixed. This turns generative outputs into social objects that others can adapt, fostering creative amplification rather than isolated outputs — an explicit design to encourage group creativity over solitary consumption.

Memory, connectors, and the new persistence model

Intent‑driven memory: second‑brain, with guardrails

Copilot’s memory is intent‑driven: the assistant stores facts and preferences only when the user gives explicit or clear direction to remember something (e.g., “Remember that I prefer Python examples”). Users will be able to edit, update, or delete memories, and tenant‑level admin controls can limit memory in managed accounts. This model reduces accidental retention but depends on accurate intent detection and a discoverable memory UI to give users real control. Early rollout reports show the memory UI is still arriving in stages, so discoverability and reliability vary by account during the staged deployment.
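The intent‑driven policy described above can be sketched in a few lines. This is a toy illustration of the policy, not Copilot's implementation: the class, method names, and the regex stand‑in for intent detection are all assumptions (a real system would use a classifier, and Copilot's actual memory store is internal to Microsoft).

```python
import re


class IntentDrivenMemory:
    """Store a fact only on an explicit 'remember' instruction; let the
    user list and delete what was stored. A minimal sketch of the policy,
    with a regex standing in for real intent detection."""

    def __init__(self):
        self._facts = []

    def handle(self, utterance):
        # Only an explicit directive triggers retention.
        m = re.match(r"remember that (.+)", utterance.strip(), re.IGNORECASE)
        if m:
            self._facts.append(m.group(1))
            return f"Saved: {m.group(1)}"
        return None  # no memory intent: nothing is retained

    def list_memories(self):
        # A discoverable memory UI would render this list.
        return list(self._facts)

    def forget(self, index):
        # User-initiated deletion, mirroring the edit/delete controls.
        return self._facts.pop(index)


mem = IntentDrivenMemory()
mem.handle("Remember that I prefer Python examples")
mem.handle("What's the weather like?")  # ordinary chat is not stored
print(mem.list_memories())  # ['I prefer Python examples']
```

The design point the sketch makes explicit: retention is the exception, not the default, and everything stored must be enumerable and deletable by the user — which is why a discoverable memory UI matters as much as the storage policy itself.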

Connectors: power and risk

By linking cloud accounts — OneDrive, Outlook, Gmail, Google Drive, and Google Calendar — Copilot can read documents, emails, and events to ground answers and perform Deep Research across multiple sources. This is a huge productivity win for multi‑document tasks, but it substantially expands Copilot’s access surface. Microsoft emphasizes opt‑in consent, folder‑scoped linking where possible, and admin governance, yet OAuth permission scopes and staged rollout particulars mean users and admins must verify what’s being shared before enabling connectors. Some consumer connector availability remains fluid during rollout.

Safety, provenance, and Copilot for Health

Grounding health answers in trusted sources

Copilot for Health is explicitly designed to reduce hallucination risk in medical queries by grounding responses in vetted sources (Microsoft cites partnerships such as Harvard Health). The experience includes a Find‑Care flow to help users locate clinicians by specialty, location, and language, and it emphasizes that AI outputs should be a starting point for professional care rather than definitive medical advice. At launch the health features are U.S.‑first; users outside the initial markets will see phased availability.

Real Talk: calibrated pushback

One of the more philosophically significant moves is Real Talk, an optional conversation style that encourages Copilot to challenge user assumptions when statements are inaccurate, risky, or potentially self‑harmful. This is a deliberate design to avoid the “yes‑man” problem of many earlier assistants, encouraging safer, more critical interactions. But Real Talk also introduces complexity: pushback requires auditable reasoning, explicit provenance, and careful escalation rules to avoid adversarial outcomes.

Copilot in Edge and Windows: agentic and resumable workflows

Edge: tab reasoning, Actions, and Journeys

Copilot Mode in Edge evolves the browser into an AI browser that reasons across open tabs, summarizes and compares information, and — with permission — performs actions such as booking or filling forms. The Actions capability is agentic: it can execute multi‑step tasks on the web, which accelerates workflows but raises new governance questions about delegated actions (payments, bookings). Journeys organizes past browsing into storyline‑like workspaces so users can resume tasks without retracing steps. These features are powerful but hinge on tight permission models and clear user confirmations.

Windows Copilot: “Hey Copilot” and PC as a conversational surface

Windows Copilot turns each Windows 11 PC into a voice‑enabled AI PC with a wake word (“Hey Copilot”), Copilot Home quick access, and Copilot Vision for camera‑assisted guidance. It’s a consistent effort to make the OS itself an ambient partner, not merely a platform for apps. As with Edge, implementations will vary by device and build, and the wake‑word and vision interactions require careful local processing and privacy design to avoid surprises.

Under the hood: models, MAI family, and technical posture

Microsoft’s release notes and product pages indicate the company is using a mix of in‑house models (the MAI family such as MAI‑Voice‑1, MAI‑1‑Preview, and MAI‑Vision‑1) and partner models where appropriate, with product integrations still ramping. This hybrid strategy aims to pair Microsoft’s integration depth with model innovation and to iterate quickly on voice, vision, and multimodal capabilities. Model choice matters: different models offer different tradeoffs in latency, cost, safety filtering, and grounding — and Microsoft has signaled product teams will continue to refine which models power which experiences.

Strengths: what Microsoft gets right

  • Integration depth — native ties across Windows, Edge, Microsoft 365, and cloud connectors create a seamless experience for multi‑app workflows.
  • Human‑centered design intent — the opt‑in avatar, the optional Real Talk pushback mode, and intent‑driven memory show a clear attempt to put people first rather than engagement metrics.
  • Productive agentic features — Actions, Journeys, and Deep Research can reduce friction in routine multi‑step tasks and research workflows.
  • Enterprise governance scaffolding — Purview/eDiscovery integration, tenant controls, and retention policies provide administrators with compliance levers not available in many competing consumer assistants.

Risks, unknowns, and practical caveats

Rollout inconsistency and provisional features

Multiple elements of the Fall Release are being staged via A/B tests and preview builds. Users should expect uneven availability and occasional UI placeholders; several reported behaviors (e.g., the Clippy easter egg or exact participant caps) were observed in previews and are not fully documented in official release notes. Treat provisional findings as informative but subject to change.

Privacy surface expansion

Connectors and memory expand the surface area where personal data can be processed. Even with opt‑in toggles, the breadth of OAuth scopes, the degree of tenant admin visibility, and the discoverability of memory items mean that data governance decisions are now critical for both consumers and IT departments. Administrators should update retention and eDiscovery policies and audit connector scopes before broadly enabling these features in managed environments.

Hallucination and provenance

Grounding in vetted sources (for example, Copilot for Health citing Harvard Health) reduces but doesn’t eliminate hallucinations. Any agentic action that results from Copilot recommendations should be verified by a human — especially health, legal, or financial decisions. The Real Talk mode embraces pushback, but that depends on transparent chains of reasoning and reliable citations.

Monetization and gating uncertainty

Microsoft has not fully clarified which connectors or advanced features will remain free and which will be gated behind subscription tiers. Early reporting and industry precedent suggest some connectors or advanced Deep Research capabilities may require Microsoft 365 or paid Copilot tiers; assume variability until Microsoft publishes a consolidated consumer pricing page.

Practical guidance: how to pilot Copilot responsibly

  • Review the Copilot profile and privacy controls immediately after enabling it; locate Manage Memory and Connected Apps toggles.
  • For sensitive use (health, legal, HR), treat Copilot outputs as drafts and require human verification before acting.
  • Admins: test connectors and Deep Research in a controlled sandbox tenant before broad rollout; verify Purview/eDiscovery policies cover Copilot interactions.
  • Limit OAuth scopes when linking third‑party storage (choose specific folders where supported) and revoke connectors you don’t need.
  • Pilot agentic Actions with monitoring and auditable logs; require explicit confirmations for bookings, payments, or any action with downstream consequences.
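The last item — auditable logs plus explicit confirmations for agentic Actions — can be sketched as a simple gate pattern. This is not a Copilot or Copilot Studio API: the `run_action` helper and its `execute`/`confirm` callbacks are hypothetical names illustrating the pattern an IT team might enforce around any delegated action.

```python
import time


def run_action(action_name, params, *, execute, confirm, audit_log):
    """Gate an agentic action behind explicit confirmation and record the
    outcome. `execute` performs the task; `confirm` asks the human (e.g.,
    via a UI prompt); both are supplied by the caller."""
    entry = {"ts": time.time(), "action": action_name, "params": params}
    if not confirm(action_name, params):
        entry["status"] = "declined"   # declined actions are logged too
        audit_log.append(entry)
        return None
    try:
        result = execute(**params)
        entry["status"] = "completed"
        return result
    except Exception as exc:
        entry["status"] = f"failed: {exc}"
        raise
    finally:
        audit_log.append(entry)        # every attempt leaves a record


# Usage: a booking only proceeds after the (here, stubbed) confirmation.
log = []
booked = run_action(
    "book_room", {"room": "A101", "hour": 14},
    execute=lambda room, hour: f"{room}@{hour}",  # stand-in for the real task
    confirm=lambda name, params: True,            # stand-in for a UI prompt
    audit_log=log,
)
print(booked, log[-1]["status"])  # A101@14 completed
```

The key property for governance is that the log is written on every path — declined, completed, and failed — so auditors can reconstruct what the agent attempted, not just what succeeded.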

How this release fits the competitive landscape

Microsoft’s Fall Release is both defensive and offensive: it closes functionality gaps relative to other assistants that already offer connectors and persistent workspaces, while pushing new bets — avatar‑led voice UX, group collaboration, and agentic browser capabilities — that differentiate Copilot. The emphasis on governance and tenant controls is a clear play to the enterprise market, while the social and education tooling (Mico, Study and Learn) aim to capture consumer and classroom mindshare. Overall, Microsoft’s platform advantage — controlling the OS, browser, productivity apps, and cloud identity — gives Copilot a strategic runway that competitors will find hard to match when integrations matter.

Verification and cross‑checks

Key claims in Microsoft’s announcement were cross‑checked against major independent outlets and early preview reporting:
  • The presence of the Mico avatar and its optional nature are documented in Microsoft materials and reported by outlets such as The Verge and Windows Central.
  • Copilot Groups with shared invites and the up‑to‑32‑participant figure appear in Microsoft’s rollout notes and were reported by Reuters and Windows Central.
  • The health grounding feature and partnerships with established publishers (Harvard Health) are specifically referenced in Microsoft messaging and covered by Reuters and Associated Press.
  • Memory and connectors behavior, including intent‑driven memory and OneDrive/Google Drive connector plans, have been observed in staged builds and described in Microsoft’s documentation; rollout variability means some specifics remain provisional.
Where reporting diverged — for example, easter‑egg behaviors or exact numerical caps — the most cautious interpretation is that those observations come from preview builds and may change in official releases; readers should verify behavior on their accounts once the staged rollout reaches their region.

Final assessment

The Copilot Fall Release is a milestone in Microsoft’s long arc to make AI companions genuinely useful across the PC lifecycle. It shows disciplined human‑centered design choices — opt‑in personality, intent‑driven memory, and a focus on collaborative flows — while delivering practical productivity gains in browsing, group work, and learning. At the same time, this shift raises new governance, privacy, and reliability challenges that organizations and careful consumers must address proactively.
Microsoft’s strategy makes sense: build the assistant where people already work, keep personalization useful but controllable, and make agentic features opt‑in and auditable. The success of this approach depends less on flashy avatars and more on clear UI for memory and connectors, robust provenance for sourced answers, and reliable, transparent controls for agentic actions. Done well, Copilot will save time and deepen human connection; done poorly, it risks becoming another source of confusion and accidental data exposure.
The responsible path forward is pragmatic adoption: pilot with strict controls, insist on provenance and audit logs, and treat Copilot outputs as accelerants for human work — not substitutes. Microsoft’s Copilot Fall Release is a significant step toward that vision; the execution and the guardrails will determine whether it becomes a trusted companion for daily computing or a new set of management headaches that IT teams must mitigate.

Conclusion
Microsoft’s Copilot Fall Release reframes the assistant as a social, persistent, and action‑capable companion that seeks to give users back time and deepen human potential rather than capture attention. With Mico, Groups, memory, connectors, health grounding, and agentic browser features, the company has bundled a broad new set of capabilities into the Copilot family. The features are compelling and well aligned with productivity needs, but they require deliberate, cautious deployment: clear memory discoverability, scoped connector permissions, auditable agent actions, and human verification for high‑stakes domains. For users and IT leaders alike, the immediate next steps are straightforward — pilot, verify, and govern — while watching for Microsoft’s continuing documentation and the gradual global rollout that will determine how these capabilities behave at scale.
Source: Microsoft Human-centered AI | Microsoft Copilot Blog