I opened Paint and a small banner asked me to join “Windows AI Labs” — an opt‑in program that, according to the on‑screen card and an attached programme agreement, will let selected users test experimental AI features inside Microsoft Paint before those features are broadly released.

Overview

Microsoft appears to be quietly piloting a new testing channel called Windows AI Labs that lets invited Windows users opt into early, unfinished AI experiences inside an inbox app (Paint for the moment). The experience described in the invite is straightforward: a top‑right pop‑up inside Paint reads “Try experimental AI features in Paint: Sign up for Windows AI Labs programme for Paint in Settings,” with an immediate “Try now” button that opens a Settings card and a program agreement. The agreement, as reported in the early alert, frames the initiative as an ongoing evaluation of pre‑release Paint features and warns participants the features are preview‑quality and may never ship.
This feels consistent with Microsoft’s broader strategy of embedding generative and assistive AI into everyday Windows apps — a push that has already manifested as Copilot‑branded experiences and on‑device model support on Copilot+ hardware. Microsoft has added advanced AI tools to Paint in recent releases, including generative erase, an Image Creator/Cocreator flow, sticker generation, and an integrated Copilot hub in the app’s toolbar. Official documentation shows Paint’s Copilot features are already gated by device capability and account sign‑in requirements. (support.microsoft.com)

Background: why this matters now

Microsoft has been steadily moving AI into core Windows utilities: Notepad, Snipping Tool, Photos, and Paint have each received generative or assistive capabilities in staged Insider rollouts over the past year. The aim is twofold: make everyday tasks easier for mainstream users, and use these inbox apps as testbeds to refine AI UX, safety filtering, and monetization flows (for example, Microsoft’s credit/subscription distinctions for certain image generation scenarios). These changes are not incidental; they are part of a larger Windows AI roadmap that includes the Copilot Runtime/Windows AI Foundry efforts and a strategy to support both cloud and local model execution depending on device hardware. (blogs.windows.com)
At the same time, Microsoft has long used staged, server‑side “flighting” systems to turn features on and off for subsets of users — the Windows Insider Blog regularly describes enablement that begins at small percentages and expands. The Paint pop‑up, which reportedly appeared without an app update being installed, looks like one of these server‑side flips: Microsoft enabled a UI prompt that points to a program sign‑up flow even though the backend “Labs” service isn’t live for most users yet. That kind of staged rollout is a known technique inside Microsoft’s feature‑release toolkit. (blogs.windows.com)
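Microsoft does not publish the internals of its flighting service, but percentage-based staged rollouts of this kind are commonly implemented by hashing a stable identifier into a fixed bucket, so a cohort only grows as the percentage is raised. A minimal sketch of that general pattern (all names hypothetical, not Microsoft's actual system):

```python
import hashlib

def in_rollout_cohort(account_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket an account into a feature's rollout cohort.

    Hashing the account ID together with the feature name gives each account
    a stable bucket in [0, 100), so raising rollout_pct only ever widens the
    cohort and never evicts accounts that were already enrolled.
    """
    digest = hashlib.sha256(f"{feature}:{account_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # stable value in [0, 100)
    return bucket < rollout_pct

# Widening the percentage strictly grows the enrolled set.
enrolled_at_5 = {f"user{i}" for i in range(1000)
                 if in_rollout_cohort(f"user{i}", "ai-labs-prompt", 5.0)}
enrolled_at_20 = {f"user{i}" for i in range(1000)
                  if in_rollout_cohort(f"user{i}", "ai-labs-prompt", 20.0)}
assert enrolled_at_5 <= enrolled_at_20
```

Because the bucket is deterministic per account, "start at a small percentage and expand" rollouts behave predictably: the same users stay in the cohort as it grows, which is what makes the staged enablements described on the Windows Insider Blog workable at scale.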

What the Windows AI Labs signal contains

The visible experience (what users saw)

  • A pop‑up inside Paint inviting selected users to “Sign up for Windows AI Labs programme for Paint in Settings.”
  • A Settings card titled “Try experimental AI features in Paint” with a Sign up button and a “Not interested” option.
  • A programme agreement document that frames participation as testing pre‑release features, warns that features are not final, and notes Microsoft may require Paint to be updated to access future Labs features.

The state of the backend

  • Reports indicate the backend for Windows AI Labs is not yet active, meaning clicking Sign up does not currently enable functional AI features; the prompt appears to have been rolled out prematurely in some production environments as a server‑side change rather than as part of a Store update. This suggests Microsoft is progressively notifying accounts before the service is ready.

The scope (initial and potential)

  • Initially limited to Microsoft Paint in this rollout, but the document language implies the Windows AI Labs model could extend to other inbox apps over time. That aligns with Microsoft’s pattern of testing new AI capabilities in one app before scaling. The Microsoft documentation and Insider posts show Paint is already a central canvas for AI experiments — generative fill, erase, sticker creation, and a Copilot hub have been integrated and refined through staged rollouts. (windowslatest.com)

How Windows AI Labs fits with Microsoft’s existing AI plumbing

Microsoft’s strategy is to place AI capabilities in places where they reach everyday users, but also to control distribution and data collection tightly. Consider three corroborating threads:
  • Microsoft’s Copilot/Copilot+ model: Microsoft has designated certain experiences and on‑device model execution for Copilot+ hardware (NPU‑equipped devices) while keeping hybrid cloud filtering and safety services in Azure. Paint’s advanced features have been gated by hardware and sign‑in requirements in official documentation. (support.microsoft.com)
  • Flighting and feature flags: Microsoft manages experimental rollouts and Insider testing via controlled enablement and server‑side flights. The Windows Insider Blog documents staged enablements where features are toggled on for small cohorts before wider availability. The appearance of a sign‑up prompt sans functional backend is consistent with this staged approach. (blogs.windows.com)
  • Productization path: Inbox apps have already moved features from experimental streams into broader releases (Cocreator, generative erase, sticker generator), indicating Microsoft’s playbook: iterate in Canary/Dev, gate by device/region/account, then scale via controlled flights and optional opt‑ins. Windows AI Labs looks like an attempt to formalize the opt‑in testing layer for just‑in‑time experimentation. (windowslatest.com)

Why Microsoft might launch a formal “Labs” program

  • Cleaner opt‑in mechanics: A dedicated program with a short agreement lets Microsoft invite users into rough, unreliable experiences while setting expectations about quality and privacy.
  • Structured feedback loop: Windows AI Labs could centralize feedback, telemetry, and moderation signals from early testers in a way that’s easier to act on than scattered Insider bug reports.
  • Legal/consent clarity: A program agreement helps formalize the collection of prompts, telemetry, and possible safety‑filtering behaviors tied to AI features — important for compliance and privacy disclosures.
  • Faster experimentation: Enabling small cohorts via server‑side flags and having an explicit opt‑in reduces the risk of surprise when Microsoft flips features that still depend on cloud services or new backend systems.
This model resembles other “Labs” programs across tech — early access to experimental features in a curated, opt‑in environment — and echoes the mechanics of Google’s Search Labs and various beta programs from other platforms. The difference here is Microsoft’s vast install base and the sensitivity of inbox apps that many users consider “core” to the OS.

Strengths of the Windows AI Labs approach

  • User empowerment via opt‑in: By making AI experiments opt‑in, Microsoft respects users who prefer a stable, non‑experimental experience while still giving enthusiasts a safe place to test new tools.
  • Reduced friction for rollout: Server‑side invites allow Microsoft to coordinate account‑level enablement without needing immediate Store updates or device‑level changes.
  • Safer release cadence: The programme agreement language and preview warnings set proper expectations and reduce the chance that early glitches will be mistaken for finished features.
  • Testbed for safety and moderation: Running moderation and safety checks in the cloud while processing image generation locally (or hybrid) is a defensible approach that banks on cloud filters to prevent obvious abuses while preserving device performance. Microsoft’s product pages for Copilot features already emphasize a hybrid safety model. (support.microsoft.com)

Risks and concerns to watch

  • Transparency and discoverability: Rolling out an invite card server‑side without a public announcement risks confusing users and fragmenting the message. Microsoft should clearly communicate what Windows AI Labs is and who’s eligible rather than relying on serendipitous pop‑ups.
  • Privacy and prompt handling: Programme agreements that ask participants to share prompts and telemetry are standard for testing, but they must be precise about retention, sharing, and how prompts may be used for model training. Current public Paint pages explain some telemetry practices (device and user identifiers, prompting for abuse prevention) but do not yet reference a “Windows AI Labs” program by name. Any expanded program must make prompt handling explicit. (support.microsoft.com)
  • Staged enablement friction: If Microsoft repeatedly shows “try now” walls that lead to non‑functional backends (a premature pop‑up), users may grow annoyed, and trust could erode. Earlier cases of join‑the‑waitlist prompts in Paint drew community complaints; Microsoft will need to coordinate messaging better to avoid false expectations. (gizchina.com)
  • Quality variability: Experimental AI features are, by definition, rougher. Making them discoverable inside core apps without easy context or escape hatches may generate negative impressions if testers encounter broken or low‑quality outputs.
  • Enterprise and policy complexity: If Windows AI Labs expands beyond consumer contexts, IT administrators will expect clear controls and policy settings to opt employees out. Microsoft’s enterprise documentation and policy surfaces must keep pace. Flighting systems and group policy controls are established in other contexts, but inbox AI brings new governance demands. (blogs.windows.com)

Practical guidance for testers and IT administrators

For hobbyists and early testers

  • Use a secondary account or test profile if you want to experiment without exposing personal work to rough AI features.
  • Read the programme agreement fully before signing up — it should describe what Microsoft collects and how prompts or generated content are treated.
  • If the sign‑up appears but the feature doesn’t work, expect that the backend is not yet live; report your experience through Feedback Hub rather than attempting to force the feature. Microsoft relies on controlled telemetry during these early flights. (blogs.windows.com)

For IT administrators and power users

  • Treat inbox AI features as services that may require account sign‑in and cloud connectivity. Verify whether your organization allows Microsoft account sign‑in on managed machines and adjust policies accordingly.
  • Monitor Group Policy and enterprise management channels for new blocking or allowlist controls for Windows AI Labs — these controls typically lag initial consumer test phases.
  • Use test devices for early adoption to evaluate privacy, data handling, and potential compliance issues before considering broader rollouts. Microsoft’s support docs for Copilot features already call out privacy and on‑device/cloud hybrid approaches you’ll want to audit. (support.microsoft.com)

What we verified (and what remains unverified)

Verified:
  • Microsoft has integrated multiple AI features into Paint (Copilot hub, generative erase, sticker generator, object select), and official support pages describe device gating, sign‑in requirements, and hybrid cloud filtering. (support.microsoft.com)
  • Microsoft runs controlled staged rollouts and server‑side feature flights for Insiders and broader groups — the Windows Insider Blog describes the mechanism. (blogs.windows.com)
Unverified or incomplete:
  • The brand name “Windows AI Labs” has no official, public presence on Microsoft’s main blogs or support pages at the time of writing. The pop‑up and programme agreement were reported in the early alert and are visible to some users, but no corporate announcement or documentation describes a program of that name on official Microsoft channels. That absence suggests the initiative is being piloted in a tightly controlled manner, or that the naming and branding may change before any broad release; until an official page exists, the program’s scope, retention policies, and long‑term plans remain unconfirmed. (blogs.windows.com)

How this compares to other “Labs” programs

Google and other major platforms have experimented with small, invite‑only labs experiences (for example, Google’s Search Labs), offering early access to features while acknowledging some may never ship. Microsoft appears to be adopting a similar pattern but within the Windows ecosystem: an opt‑in stream that keeps early users insulated from mainstream expectations while allowing engineering teams to gather rapid feedback. Compared to cloud‑only web products, doing this inside an operating system and inbox apps has additional technical and governance constraints — local model execution, device capability checks, offline behavior, and enterprise policy become relevant in ways that web labs don’t face. (windowscentral.com)

Bottom line

Windows AI Labs — as it surfaced inside Paint — is a clear signal that Microsoft wants to formalize early access to experimental AI features and keep testing tightly controlled. The mechanics we observed (server‑side prompt, an opt‑in Settings card, and a program agreement) reflect a pragmatic approach: invite a small set of users, warn them, and gradually expand once the backend and moderation systems prove reliable. That fits squarely within Microsoft’s established pattern of staged rollouts, Copilot integration, and a hybrid local/cloud safety model for AI experiences. (support.microsoft.com)
However, the brand name itself and the program’s public policy surface remain not fully documented in Microsoft’s official channels at this time. Users who see the pop‑up should read the program agreement carefully, test on non‑critical systems, and expect that features offered through Windows AI Labs will be preview quality: promising in capability, but variable in polish and availability.

What to watch next

  • A formal Microsoft announcement or a Windows Insider Blog post describing Windows AI Labs and its governance model.
  • Administrative controls for organizations to opt into or block Windows AI Labs features in enterprise environments.
  • Whether Windows AI Labs expands beyond Paint to Snipping Tool, Notepad, Photos, and other inbox apps — and how Microsoft discloses prompt retention, training use, and moderation processes for those services.
  • Any changes to the naming, scope, or availability of the program after Microsoft activates the backend for enrolled users.
Windows AI Labs, whether it becomes the official name or not, is an important development: it shows Microsoft is moving toward a structured, explicit sandbox for in‑OS AI experimentation. For users and admins alike, the wise course is cautious curiosity: test deliberately, read the terms, and treat early AI features as powerful helpers that still require human oversight.

Source: WindowsLatest Windows 11 is getting "Windows AI Labs" for early access to Microsoft's AI features
 

Microsoft has quietly begun rolling out an opt‑in testing channel called Windows AI Labs, a program that invites selected users to try experimental AI features inside built‑in Windows 11 apps — first observed in Microsoft Paint — and which appears designed to gather structured feedback and telemetry while keeping unfinished features behind an explicit consent wall. Early appearances of the program show an in‑app prompt that opens a Settings card and a programme agreement, but clicking the signup currently returns an error for many users because the backend service is not yet active, indicating a staged, server‑side rollout rather than a full public launch.

Background

Microsoft has been embedding AI into core Windows utilities across the last year, turning traditionally simple inbox apps like Paint, Notepad, Snipping Tool, and Photos into active testbeds for generative and assistive capabilities. These experiments have been surfaced through the Windows Insider channels and gradual server‑side feature flights, but Windows AI Labs appears to be a distinct opt‑in program intended specifically for experimental AI features that are not ready for broad release. The noticeable difference is the explicit program agreement and a clear user opt‑in separate from the usual Insider Preview mechanics.
Microsoft’s broader AI plan for Windows uses a hybrid architecture: cloud services in Azure coupled with on‑device model execution on certain high‑end machines Microsoft brands as Copilot+ PCs. That hardware tier — devices with certified NPUs — is prioritized for low‑latency, privacy‑sensitive on‑device AI features. The new Labs program slots into that architecture as a controlled way to test both cloud and on‑device experiments with consenting users.

What showed up in Paint: the visible experience

The prompt and user flow

Selected Paint users reported a small banner inside the app prompting them to “Try experimental AI features in Paint” with a Sign up button that opens a Settings card. The Settings card repeats the offer and links to a programme agreement that frames participation as evaluating pre‑release versions of Microsoft Paint. The agreement warns participants that features are preview quality and may never ship. Attempts to sign up have resulted in an error because the backend services required to enable Labs for most accounts are not yet active, suggesting Microsoft started the UI rollout ahead of infrastructure activation.

What the agreement emphasizes

The programme agreement is explicit: participants are testing pre‑release features, will provide feedback, and should expect instability and change. The agreement can also specify that Microsoft may require app updates to access future Labs features, creating a legal and operational boundary between experimental functionality and production features. The presence of an explicit agreement is an important design decision — it clarifies consent, scope, and expectations.

How Windows AI Labs fits into Microsoft’s release model

Server‑side feature flags and controlled flights

Microsoft has long used server‑side flights and account‑level enablement to stage features to small cohorts before wider rollouts. The Paint pop‑up appearing without an app update is consistent with that model: the UI was toggled on via server‑side flags for a subset of accounts while the Labs backend remains staged for later activation. This approach reduces the friction of deploying an opt‑in test but can create confusing user experiences if the UI appears before the necessary services are online.
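That confusing experience is avoidable if the client gates the UI on backend readiness as well as the flag. A minimal sketch of that guard, using hypothetical names rather than Microsoft's actual client logic:

```python
from dataclasses import dataclass

@dataclass
class FeatureState:
    ui_flag_enabled: bool   # server-side flag that surfaces the signup prompt
    backend_healthy: bool   # enrollment service reachable and accepting signups

def should_show_signup(state: FeatureState) -> bool:
    """Only surface the opt-in prompt when enrollment can actually succeed."""
    return state.ui_flag_enabled and state.backend_healthy

# The Paint situation described above: flag flipped on, backend not yet live.
premature = FeatureState(ui_flag_enabled=True, backend_healthy=False)
assert should_show_signup(premature) is False  # prompt would stay hidden
```

The cost of the extra health check is one more service dependency in the client; the benefit is that users never see a "Try now" button that leads to an error.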

A specialized opt‑in for AI experiments

Windows AI Labs appears to formalize what Microsoft has been doing ad hoc across Copilot and inbox apps. Instead of scattering requests for feedback across Insider channels, inbox apps, and forum posts, a central Labs program lets Microsoft:
  • Present a single legal consent agreement for AI testing.
  • Consolidate telemetry and user feedback from focused participants.
  • Reduce risk by gating unstable features behind explicit opt‑in controls.
  • Rapidly iterate on model behavior, safety filters, and UX before graduation.
This Labs model mirrors other “Labs” programs in tech but is notable for its scale when applied to Windows’ default apps.

Technical context: Copilot+, NPUs, and on‑device models

The Copilot+ hardware tier

Microsoft’s high‑end AI experiences are increasingly tied to a Copilot+ PC definition: devices that include a certified NPU (neural processing unit) with substantial TOPS throughput (often cited as 40+ TOPS in Microsoft partner materials). Those NPUs enable local execution of compact but capable models for low‑latency tasks, reducing cloud roundtrips and allowing certain features to run with minimized cloud exposure. This creates a hardware‑tiered rollout for AI features: Copilot+ devices get on‑device, high‑performance AI earlier while non‑Copilot machines may rely more on cloud services for comparable features.
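The tiering described here amounts to a capability check at feature-activation time. The sketch below uses the 40 TOPS figure cited from partner materials; the function name, threshold parameter, and cloud fallback are illustrative assumptions, not a documented Windows API:

```python
from typing import Optional

def execution_target(npu_tops: Optional[float], tops_threshold: float = 40.0) -> str:
    """Pick where an AI feature runs based on NPU throughput.

    Mirrors the tiering described above: devices with an NPU at or above the
    threshold run models locally; everything else falls back to the cloud.
    """
    if npu_tops is not None and npu_tops >= tops_threshold:
        return "on-device"
    return "cloud"

assert execution_target(45.0) == "on-device"  # Copilot+-class NPU
assert execution_target(11.0) == "cloud"      # older NPU below the threshold
assert execution_target(None) == "cloud"      # no NPU at all
```

A real implementation would also consult model availability and policy settings, but the essential decision is this single threshold comparison, which is why the exact TOPS cutoff matters so much for which users get on-device features.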

Models: Mu and the Phi family

Microsoft’s stack includes small, optimized models intended for on‑device tasks. For system control and short prompts, Microsoft has developed compact models (referred to publicly as Mu in its engineering discussions) that are engineered to run efficiently on NPUs. For more capable multimodal reasoning, Microsoft relies on the Phi family (Phi‑4 variants), which can be executed either on more powerful on‑device silicon or in the cloud. These distinctions matter because the model a Labs experiment uses will determine whether Copilot+ hardware is required. At present, it is not publicly clear which specific Labs features will require Copilot+ hardware.

What Windows AI Labs could test — practical examples

Microsoft has already introduced a range of generative features into inbox apps in staged rollouts. Windows AI Labs could be used to trial refinements and brand‑new experiments such as:
  • Enhanced generative tools in Paint: object‑aware fill, multimodal sticker creation, layered project files, and an integrated Copilot sidebar for composition assistance.
  • On‑device text generation in Notepad: summarize, rewrite, or generate content locally on Copilot+ machines to improve latency and privacy.
  • New content moderation, safety filters, or telemetry variations to measure real‑world behavior and false positive/negative rates.
  • Integration tests that combine Copilot features across apps (e.g., using Notepad outputs to seed Paint collages).
These hypotheses reflect Microsoft’s existing pattern of adding one or two marquee AI features to an app, then iterating through Insiders and staged flights. Windows AI Labs gives Microsoft a channel to surface rougher experiments to consenting users without confusing the broader population.

Strengths and benefits of the Windows AI Labs approach

  • Clear consent and expectation management: The programme agreement sets explicit expectations that features are pre‑release, which is good practice for transparency and legal clarity.
  • Faster experimentation cycle: Server‑side opt‑ins let Microsoft test features in the wild and collect rapidly actionable telemetry without needing full Store updates.
  • Focused feedback loop: Centralizing consenting testers makes telemetry and qualitative feedback easier to analyze and act upon, potentially accelerating product maturation.
  • Risk containment: Opt‑in testing reduces the chance of exposing unstable AI behavior to users who do not want or expect it, keeping the mainstream experience stable.
  • Alignment with hybrid AI strategy: Labs can help validate when a feature should run on‑device (for privacy/latency) versus in the cloud (for capability/scale), helping Microsoft fine‑tune hardware tier boundaries.

Risks, unknowns, and areas to watch

Despite the potential benefits, the Windows AI Labs approach raises immediate questions and risks that merit attention.

1. Privacy and telemetry scope

The programme agreement and consent flow are the right first steps, but significant concerns remain about what exactly will be collected, how prompts and generated content are stored, how long data is retained, and whether private on‑device processing can be converted into cloud processing under the hood for safety checks. The current documentation signals Microsoft’s intention to collect feedback and telemetry, but granular telemetry details (what fields, redaction practices, and retention schedules) remain unclear in early reports. Users and IT admins should demand specific DLP and retention disclosures before enabling Labs on managed devices.

2. Confusing staged rollouts

An interface that appears before the backend is active — as happened with Paint signups returning an error — can frustrate and erode trust if users see promises they cannot immediately access. Microsoft will need to synchronize UI enablement with backend readiness more closely as Labs expands.

3. Hardware segmentation and fairness

Tying the best experiences to Copilot+ NPUs creates a stratified Windows experience. While this is technically defensible (NPUs enable local model inference), it risks fragmenting user expectations: features tested in Labs might be limited to users with high‑end hardware, skewing feedback and masking issues that would appear on mainstream devices. Microsoft must be transparent about hardware requirements for each Labs test.

4. Safety, content moderation, and hallucination

Experimental generative AI features often surface safety gaps — hallucinations, biased outputs, or unsafe creative content. A structured Labs program must pair feature experiments with rigorous safety testing, human review pipelines for escalations, and clear user reporting mechanisms. The programme agreement is a start but operational safeguards and accountability must be visible to build trust.

5. Enterprise and regulatory compliance

Enterprises will need to know whether Labs participation will expose corporate data (screenshots, document fragments, prompts) to telemetry. The program’s consent language must be enterprise‑friendly, and Microsoft should provide admin controls to allow or block Labs participation on managed devices. Otherwise, IT teams may have to restrict access entirely, limiting Microsoft’s ability to test in real enterprise workflows.

What this means for everyday users and IT admins

For consumers and enthusiasts

  • Expect a new opt‑in channel for rough, early AI features that may be exciting but unreliable.
  • If privacy is a concern, review the programme agreement carefully and monitor what data the app requests to send.
  • If you enjoy testing and feeding back, Labs may give early access to features that later ship to all users.

For IT administrators and enterprises

  • Treat Labs as a separate opt‑in program and evaluate it like any preview program.
  • Consider blocking or whitelisting the Labs enrollment flow through enterprise policies until clear DLP guidance is provided.
  • Ask Microsoft for clarity on telemetry, retention, on‑device vs cloud processing, and admin controls before permitting Labs participation on corporate devices.
  • Build a test plan that mirrors real workflows if you allow selected groups to participate; don’t rely solely on enthusiast feedback.

Likely productization path and rollout mechanics

Based on Microsoft’s historical patterns and the evidence in early Labs appearances, the most probable path will be:
  • Small, account‑level server toggle that surfaces the Labs UI (already observed).
  • Backend activation for specific cohorts or regions, enabling the experimental features for consenting accounts.
  • Iterative refinement during a bounded test period with telemetry and active feedback solicitation.
  • Graduation (some features) into Insider channels or broad release if performance, safety, and utility metrics pass thresholds.
  • Ongoing gating of full capabilities by Copilot+ hardware where on‑device inference is required, and cloud fallbacks for non‑Copilot devices.
This path gives Microsoft flexibility to scale features sensibly while limiting exposure to regressions or safety incidents.

Concrete recommendations for Microsoft and for users

Recommendations for Microsoft

  • Publish a clear Labs privacy and telemetry specification (fields collected, redaction, retention, and human review triggers).
  • Synchronize UI enablement with backend readiness to avoid early prompts that lead to errors.
  • Provide enterprise policy controls to opt out / whitelist Labs enrollment in managed environments.
  • Label hardware requirements for each Labs experiment (e.g., “Requires Copilot+ NPU”) so testers know upfront whether their device will support a feature.
  • Offer an in‑app “report a problematic output” mechanism that routes safety issues to a rapid response team.

Recommendations for users and IT admins

  • Read the programme agreement and privacy details before enabling Labs.
  • For enterprises: block Labs enrollment on corporate devices until detailed telemetry and compliance guidance are available.
  • For testers: provide concrete, reproducible feedback and include device telemetry when requested — that data helps Microsoft reproduce and fix issues faster.
  • Backup important work when experimenting with preview AI features; pre‑release features can alter or corrupt in‑app documents.

Verification and cross‑checking of key claims

Key claims in this analysis are corroborated across multiple independent traces of the early rollout:
  • The Paint‑embedded signup prompt and programme agreement are documented in early reports and internal excerpts indicating the UI and wording shown to users.
  • Attempts to sign up returning errors are consistent with server‑side feature flags being enabled without backend activation. The pattern matches Microsoft’s historical flighting behavior.
  • The Copilot+/NPU hardware tier and on‑device model strategy are referenced in multiple technical summaries about Microsoft’s small models (Mu) and Phi family, and in platform documentation about device requirements for premium AI features. However, the precise TOPS threshold, partner device lists, and exact feature‑to‑hardware mappings continue to evolve and should be verified against Microsoft’s published partner guidance when making procurement decisions.
Where specifics remain unverified — such as the definitive list of features that will appear in Labs or a firm public launch date — this analysis flags those items as not yet determinable and recommends awaiting formal Microsoft documentation before taking irreversible actions.

Conclusion

Windows AI Labs represents a logical next step in Microsoft’s AI rollout strategy: a formalized, opt‑in program where consenting users can test pre‑release AI features in familiar inbox apps such as Paint. The early rollout demonstrates Microsoft’s attempt to balance rapid experimentation with user consent and legal clarity via a programme agreement, but practical concerns remain — notably telemetry transparency, backend‑UI synchronization, and hardware‑driven feature fragmentation.
For enthusiasts, Labs promises an early window into experimental creativity and productivity tools. For IT administrators and privacy‑minded users, Labs underscores the importance of careful evaluation, clear DLP guidance, and enterprise controls before enabling pre‑release AI features on production devices.
Microsoft’s staged rollout behavior — surfacing UI before backend activation — indicates the company is moving quickly; the coming weeks will show whether Windows AI Labs can mature into a constructive mechanism for safe, transparent AI experimentation at Windows scale.

Source: Windows Central It looks like Microsoft is about to launch a new "Windows AI Labs" program for testing experimental AI features in Windows 11 apps
 
