Microsoft’s latest Canary‑channel experiment pushes intelligence deeper into the Windows shell: a new AI actions submenu in File Explorer lets you right‑click images to run Bing Visual Search, blur or remove backgrounds, and erase objects — all without opening a full editor. This context‑aware shortcut set is appearing in Insider previews reported around Build 27938, and it signals Microsoft’s intent to make micro‑edits and visual lookup native file‑management actions rather than tasks that always require launching separate apps. (theverge.com)

[Image: software UI displaying AI actions with options to remove or blur background over a scenic wallpaper.]

Background

Why File Explorer matters now​

File Explorer has long been the nerve center for Windows users: the place where files are discovered, organized, and dispatched to apps. Microsoft’s design imperative over the last two years has been to bring more capability into that surface — from tabbed browsing to a Home view and now to actionable AI affordances. Embedding AI at the file level reduces context switches and makes small, repetitive jobs faster. Several recent previews have shown Microsoft experimenting with placing assistive features (Copilot, Click‑to‑Do, and quick actions) directly where users interact with files. (theverge.com)

Where this fits in Microsoft’s AI strategy​

The move is consistent with Microsoft’s broader OS strategy: integrate AI into core experiences while providing visibility and controls for generative workflows. Microsoft has been shipping Copilot integrations and on‑device model surfaces (Copilot+ hardware) and is now testing discoverability patterns that bring those models to one‑click file operations. That strategy aims to shift routine editing and lookup tasks from a multi‑app workflow to immediate, contextual actions inside the shell.

What’s in this Canary preview (what you’ll see)​

The AI actions menu — at a glance​

When the feature is visible on a device running the relevant Insider build, a right‑click on a supported image will expose an AI actions submenu containing options such as:
  • Bing Visual Search — use the image as the search input to find visually similar items, shopping links, landmarks, or other context. (windowscentral.com)
  • Blur Background — launches the Photos app to automatically detect the subject and apply a background blur with intensity and brush controls. (blogs.windows.com)
  • Erase Objects — invokes Photos’ generative erase capabilities to remove unwanted elements from an image. (laptopmag.com)
  • Remove Background — routes the image to Paint’s automatic background removal pipeline for a quick subject cutout. (windowscentral.com)
Supported image formats at first are common raster types: .jpg, .jpeg, and .png. Microsoft frames these as an early, demo-stage set and may expand format support over time. (tech.yahoo.com)
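To make that format gating concrete, here is a minimal Python sketch that checks whether a file would qualify for the image AI actions based on the reported extension allow-list; the allow-list constant and function name are illustrative assumptions for this article, not Microsoft's implementation.

```python
from pathlib import Path

# Extensions reported as supported in the early preview (see above).
# This allow-list is illustrative, not Microsoft's actual gating logic.
AI_ACTION_IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}


def qualifies_for_image_ai_actions(file_path: str) -> bool:
    """Return True if the file's extension matches the reported allow-list."""
    return Path(file_path).suffix.lower() in AI_ACTION_IMAGE_EXTENSIONS


if __name__ == "__main__":
    for sample in ("vacation.JPG", "scan.png", "poster.psd", "raw_shot.cr2"):
        status = "eligible" if qualifies_for_image_ai_actions(sample) else "not eligible"
        print(f"{sample}: {status}")
```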

A note on the build number and staged rollouts​

Community reporting has tied the experiment to Windows 11 Canary Preview Build 27938, but Canary flights are heavily server‑gated and subject to rapid changes. That means two Insiders on the same build may see different behaviors; official Flight Hub or Windows Insider posts should be checked for formal confirmation. Treat any specific build number in community coverage as reported until it appears in official build notes. (blogs.windows.com)

How the feature works (technical mechanics and plumbing)​

Shell hooks, app backends, and hybrid execution​

The AI actions entry is a shell hook — a context‑menu launcher that either:
  • Calls a first‑party app (Photos, Paint) with a scripted edit flow so the app opens with the operation already staged.
  • Or invokes Windows’ generative‑AI platform APIs to run a quick transformation or query and return a result canvas without a full app context.
Because the actions reuse existing app capabilities, the experience you see depends on the installed Photos/Paint app versions and the platform APIs available on your device. In some cases the processing can run locally on Copilot+ hardware; in others it will fall back to cloud endpoints. Microsoft has not published a comprehensive decision tree that tells users which operations run locally versus in the cloud, which is a key transparency gap at the moment.
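As a rough illustration of that launcher pattern, the Python sketch below simply hands an image off to a first-party editor; the action-to-app mapping is an assumption made for demonstration, and it does not reproduce the staged-edit plumbing the actual shell hook uses.

```python
import os
import subprocess


def launch_ai_action(action: str, image_path: str) -> None:
    """Hand the selected file to a first-party app, as the context-menu launcher does.

    The action-to-app mapping here is an assumption for demonstration; the real
    AI actions entry stages the specific edit inside Photos or Paint, which this
    sketch does not attempt to reproduce.
    """
    if action in ("blur_background", "erase_objects"):
        # Photos is the default image handler on most Windows installs, so opening
        # via the file association approximates "hand off to Photos".
        os.startfile(image_path)
    elif action == "remove_background":
        # Classic Paint accepts a file path on its command line.
        subprocess.Popen(["mspaint.exe", image_path])
    else:
        raise ValueError(f"Unknown action: {action}")


# Example (Windows only):
# launch_ai_action("remove_background", r"C:\Users\Public\Pictures\sample.jpg")
```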

Copilot+ hardware gating and licensing​

Microsoft has been clear elsewhere that certain on‑device model capabilities are tied to Copilot+ certified hardware and, for some document‑level AI features, to Microsoft 365/Copilot licensing for commercial tenants first. Expect staged availability, with some features appearing only on eligible machines or for organizations with specific licensing. Consumer rollout may follow at a later date. (tech.yahoo.com)

Practical uses and immediate productivity wins​

  • Rapid cleanup for social or internal images: remove distractions, blur backgrounds, or produce thumbnails without opening a heavyweight editor. (laptopmag.com)
  • Quick visual research: use Bing Visual Search on a screenshot to find source pages, products, or related imagery in one step. (windowscentral.com)
  • On‑the‑fly privacy scrubs: erase license plates or other sensitive background items before sharing screenshots or photos.
These micro‑optimizations are not about replacing Photoshop; they are about handling small, common edits faster and keeping users “in flow” inside File Explorer. That can save seconds per task that add up over a day.

Step‑by‑step: how to try AI actions today (Insider preview)​

  • Join the Windows Insider Program and opt into the Canary (or Dev/Beta as appropriate) channel. (blogs.windows.com)
  • Update to the latest preview build available to your ring and confirm Windows Update shows the new flight. Because rollouts are staged, features may not appear immediately.
  • Right‑click a .jpg or .png in File Explorer and look for AI actions in the context menu. If present, open the submenu and choose the action you want. (windowscentral.com)
  • (Advanced / riskier) Power users have used tools such as ViveTool to toggle in‑development features, but that can destabilize the system and is not supported for production machines. Use this method only on test devices. (windowsforum.com)

Privacy, security, and governance — the tradeoffs​

What gets uploaded (and what might not)​

Bing Visual Search and Copilot flows may upload image payloads to Microsoft services for analysis. For many photo edits invoked through Photos or Paint the processing may happen locally, but Microsoft’s hybrid model means networked processing is possible. Microsoft has started adding visibility surfaces in Settings to show generative AI usage, but the exact data retention and telemetry policies for each action are not fully documented in the preview notes. Users should assume that cloud components may receive image data unless a local‑only processing guarantee is specified. (tech.yahoo.com)

Administrative controls and enterprise implications​

The preview introduces a Text and image generation (or Generative AI) view under Privacy & security that lists apps which recently used Windows‑provided generative models and offers per‑app toggles. This is a useful first step for IT governance, but administrators will need more granular controls — per‑action blocking, network restrictions, and audit logging — for enterprise deployments. Expect Group Policy and MDM hooks to evolve as these features mature. (blogs.windows.com)

Attack surface and data exfiltration risks​

Any feature that makes it easier to send file contents off‑device increases the potential for exfiltration if a machine is compromised. Action flows that automatically upload images to cloud endpoints could be abused by malware or a malicious insider to leak data. Organizations should inventory which devices have AI features enabled and apply least‑privilege, strict network egress rules, and behavioral monitoring to mitigate this risk.
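As a starting point for that kind of behavioral monitoring, the following Python sketch (using the third-party psutil package) lists remote endpoints for connections owned by Photos and Paint processes. Treat it as a triage aid only; the process names are assumptions about how those apps appear in the process list and may differ across app versions and machines.

```python
import psutil  # third-party: pip install psutil

# Assumed process names for Photos/Paint; verify against Task Manager on your fleet.
WATCHED_PROCESS_NAMES = {"microsoft.photos.exe", "photos.exe", "mspaint.exe"}


def report_watched_connections() -> None:
    """Print remote endpoints for connections owned by the watched image-editing apps."""
    for conn in psutil.net_connections(kind="inet"):
        if conn.pid is None or not conn.raddr:
            continue
        try:
            name = psutil.Process(conn.pid).name().lower()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        if name in WATCHED_PROCESS_NAMES:
            print(f"{name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")


if __name__ == "__main__":
    # Run elevated on Windows for complete per-process connection data.
    report_watched_connections()
```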

UX tradeoffs: discoverability vs. clutter​

Adding AI actions to the right‑click menu improves discoverability for casual users but risks inflating a context menu many power users rely on for fast keyboard and mouse workflows. Microsoft will need to balance discoverability with personalization controls — for example, exposing a simple toggle to hide AI entries or letting users pin only the actions they use most. Early feedback across the Insider community highlights this tension: convenience for many, irritation for some.

Verification of key claims (what’s confirmed and what remains uncertain)​

  • Confirmed: Microsoft is testing AI actions in File Explorer that include Bing Visual Search, Blur Background, Erase Objects, and Remove Background; these appear as a right‑click submenu in recent Insider previews. Multiple mainstream outlets and Microsoft’s own Insider notes describe these exact operations. (blogs.windows.com, theverge.com)
  • Confirmed: Initial image format support is limited to JPG/JPEG and PNG. (laptopmag.com)
  • Confirmed: Microsoft plans similar AI actions for Microsoft 365 files (summarize, create FAQ), but initial availability for those document actions will be limited to commercial Microsoft 365 tenants with Copilot licensing before broader consumer availability. (tech.yahoo.com, windowscentral.com)
  • Partially confirmed / caution: The community has associated these experiments with Build 27938, but Canary channel rollouts are server‑side gated and official Flight Hub entries may lag; treat build‑number claims as provisional until Microsoft posts formal release notes.
  • Unverified detail: The exact split between local versus cloud execution for each action is not fully documented in public preview notes; Microsoft’s hybrid approach suggests device hardware and tenant licensing will influence where processing occurs, but specifics for each operation should be considered tentative. Flag: awaiting more explicit Microsoft documentation.

Recommendations for power users and IT teams​

For power users and creators​

  • Test features on a non‑critical machine or VM to avoid instability from Canary builds.
  • Keep Photos and Paint updated via the Store; the Explorer shortcuts delegate to those apps, so their versions affect what you’ll see. (windowscentral.com)
  • If you’re privacy‑conscious, avoid running Visual Search on images containing sensitive information until Microsoft clarifies data handling for each action.

For IT and security teams​

  • Inventory Insider devices and flag machines enrolled in Canary or Dev — don’t mix test and production workloads.
  • Evaluate and test new Settings surfaces for generative AI and push MDM/Group Policy controls where available. (blogs.windows.com)
  • Enforce network egress policies to restrict unintended cloud uploads and monitor for unexpected upload patterns from the Photos/Paint processes.
  • Update acceptable‑use policies and staff training to cover new context‑menu options that can alter or upload content. Documentation and change logs are vital.

Broader implications and what to watch next​

  • Expect Microsoft to expand action sets (more image operations, additional file formats, and document summarization) as feedback arrives from Insiders. Multiple outlets have reported a roadmap that includes AI actions for Office files and deeper Copilot integration. (windowscentral.com, laptopmag.com)
  • Watch for clearer documentation about local vs cloud execution. The privacy calculus for enterprises and privacy‑sensitive consumers hinges on whether image payloads leave the device. Microsoft building stronger transparency (data retention, model endpoints) will be a necessary step for wider acceptance.
  • Expect administrative controls and telemetry dashboards to evolve. The initial Settings view showing “recent activity” is promising, but enterprise adoption will demand fine‑grained policy options and audit capabilities.

Design critique: strengths and notable risks​

Strengths​

  • Reduced friction: Surfacing small edits at the file level shortens common workflows and reduces app switching, which improves productivity for rapid tasks.
  • Leverages existing apps: By delegating to Photos and Paint, Microsoft accelerates rollout while reusing proven editing backends rather than building entirely new pipelines. (windowscentral.com)
  • Visibility into generative AI use: Adding a Settings surface that lists recent generative operations is a responsible first step toward transparency.

Risks and caveats​

  • Privacy and data handling: Unclear boundaries between local and cloud processing are a real concern; image payloads may travel to Microsoft services for some operations. (tech.yahoo.com)
  • Enterprise governance: Initial controls are visibility‑focused; enterprises will demand per‑action policy enforcement and clearer audit trails before enabling this broadly.
  • Discoverability vs clutter: An overloaded context menu can anger power users who prefer lean, scriptable interfaces. Microsoft must add good customization options.

Final verdict: a pragmatic, cautious step forward​

Embedding AI actions into File Explorer is a pragmatic next step for Windows: it meets users where their files live and solves real, repetitive problems with minimal friction. For individuals who edit screenshots and social images frequently, these shortcuts will feel like a clear productivity win. For IT pros and privacy‑minded users, the preview raises important governance questions that Microsoft must answer before broad enterprise adoption — especially around data flows, retention, and per‑action controls.
Treat the current build as a preview of where Windows is heading rather than a final design. Test on non‑production devices, follow Microsoft’s official Insider blog for confirmed build notes, and monitor the evolving privacy and admin controls as the feature graduates from Canary to broader channels. (blogs.windows.com, theverge.com)

Microsoft’s experiment shows one important truth: as AI becomes a routine part of daily computing, the operating system — not only standalone apps — will become the surface for intelligent micro‑workflows. The next questions are not only what AI can do in File Explorer, but how clearly Microsoft will document where that work runs, who sees the data, and how administrators can control it.

Source: Windows Report Windows 11 Canary Preview Build 27938 Adds AI Actions in File Explorer
 

Microsoft’s latest Canary‑channel experiment stitches AI into one of Windows’ oldest workflows: right‑clicking files. The reported Windows 11 Insider Preview Build 27938 surfaces a new AI actions entry in File Explorer’s context menu that lets you run visual search, blur or remove backgrounds, and perform generative erases on common image formats without opening a separate editor — and it pairs that with a returning Notification Center clock that displays seconds and a new Settings surface that lists apps using the OS’s generative AI capabilities. (theverge.com)

[Image: Windows desktop with AI photo-editing tools overlaying a vibrant blue abstract wallpaper.]

Background

Windows has spent the last two years moving AI from siloed apps into system surfaces: Copilot, OneDrive Copilot actions, and “Click to Do” experiments have shown Microsoft’s preference for surfacing intelligence where users already work. The File Explorer integration is a logical next step: the file system is the natural place to ask “do something” with a file. Multiple mainstream outlets and Insider hands‑on reports describe the same set of image‑focused quick actions and a roadmap to extend the concept to Microsoft 365 documents — though some document features will be gated by Copilot/Microsoft 365 licensing at first. (windowscentral.com) (theverge.com)
Microsoft’s official Flight Hub remains the authoritative source for build listings, and community reporting has tied these experiments to a Canary flight commonly reported as Build 27938. Flight Hub’s public list of active builds does not always reflect every community‑reported Canary number in real time, so treat a specific build label as community‑reported until confirmed. (learn.microsoft.com)

What’s in the Canary test: AI actions in File Explorer​

The new right‑click menu item​

When the feature is available for your device, right‑clicking a supported image will show an AI actions submenu. This menu groups several one‑click workflows that either launch an app with the edit staged, run a rapid model‑driven edit, or route the image to Bing Visual Search for web lookups. Reported image actions at introduction include:
  • Bing Visual Search — use the image itself as the search query to identify landmarks, plants, products, people or find visually similar images on the web. (windowscentral.com)
  • Blur Background — opens the Photos app with an automatic subject/background separation and tools to adjust blur intensity or refine areas with a brush.
  • Erase Objects — invokes a generative erase flow (Photos) to remove unwanted elements in the scene.
  • Remove Background — launches Paint’s automatic background removal to produce a one‑click subject cutout with no background.
Supported image file types at first are .jpg, .jpeg, and .png; RAW, PSD, TIFF and many professional formats are not reliably supported in these quick Explorer flows. The actions reuse existing first‑party apps (Photos, Paint) and Bing Visual Search rather than embedding entirely new editors into Explorer.

Document and OneDrive Copilot actions (roadmap)​

Beyond images, Microsoft has also been adding Copilot actions to OneDrive and File Explorer for Microsoft 365 files stored in OneDrive: Summarize, Ask a question, Create an FAQ, and Compare up to five files. Those Copilot actions operate on Office formats, PDFs and other document types and are explicitly tied to Microsoft 365/Copilot entitlements at launch. The current Canary image actions are separate but part of the same strategic move: bring AI micro‑workflows into shell surfaces and the OneDrive activity center.

How the features work (practical detail)​

  • The Explorer submenu acts as a launcher: it either passes the file reference to a target app (e.g., Photos/Paint) with the edit preloaded, or it sends the image to Bing Visual Search and returns results directly.
  • Some operations may run locally on device hardware if your PC supports on‑device models (Copilot+ devices with NPUs), while others will fall back to cloud processing. The precise locality per action is not guaranteed in public docs and can vary by hardware and account entitlements. That hybrid model is important for privacy, latency and cost considerations.
  • For document Copilot features, processing currently occurs in Microsoft’s cloud and is restricted to files in OneDrive when invoked through the OneDrive UI or File Explorer OneDrive submenu. This keeps heavier document analysis centralized but requires Microsoft 365/Copilot licensing for some operations.

Enabling and trying it today​

  • Join the Windows Insider Program (Canary, Dev or Beta as appropriate) and update Windows to the latest preview flight.
  • If your build and device are eligible and the server‑side flag is enabled, right‑click a supported image in File Explorer to see AI actions.
  • For the returning Notification Center clock with seconds, go to Settings > Time & language > Date & time and toggle Show time in the Notification Center. If that option is not visible, the feature is still rolling out and may be gated; advanced users can enable internal flags with ViVeTool using feature IDs commonly circulated by the community. Use caution with ViVeTool. (pureinfotech.com)
Important caveat: Canary builds are experimental and heavily server‑gated; you may not see the same behavior as others even on the same build number. Flight Hub is the place to confirm official listings — community screenshots or posts may show a feature before Flight Hub or the Windows Insider Blog lists it. (learn.microsoft.com)

The new privacy & visibility controls​

Build 27938 (community reports) adds a Privacy & security → Text and image generation section in Settings that lists third‑party apps which have invoked Windows‑provided generative AI models recently. The intent is clear: provide visibility about which apps used OS‑provided generative capabilities and allow per‑app control over access. This is an early transparency step toward governance; enterprises and admins will need Group Policy/MDM controls and clearer audit logs for real manageability.

Reliability fixes and known issues in the Canary flight​

The Canary flight reporting this work also ships a range of typical fixes and known regressions. Community summaries and Insider notes list:
  • Fixes: “Reset this PC” reliability under Settings → System → Recovery; dark‑mode color issues for low‑space drive indicators in This PC; restored thumbnails for some video files with specific EXIF; WMI Registry scanning performance improvements; Task Manager freeze fixes; and resolution of some green‑screen errors (ntoskrnl.exe CRITICAL_PROCESS_DIED) reported in earlier Canary builds.
  • Known issues: installation rollbacks with error codes 0xC1900101‑0x20017 or 0xC1900101‑0x30017 for some Insiders; certain settings pages hanging when scanning temporary files; PIX on Windows unable to play GPU captures until a PIX update; audio device issues showing yellow exclamation in Device Manager; screen flicker in some browsers; and occasional UI inconsistencies tied to server‑side gating. Microsoft is actively investigating many of these.
These stability notes matter because Canary flights can contain low‑level platform changes that increase risk during daily use; Insiders should avoid installing Canary builds on mission‑critical machines without backups.

Why this matters: productivity, discoverability and the shell​

Bringing AI into File Explorer is a classic friction‑reduction play. These changes address three common pain points:
  • Speed for micro‑tasks: small edits and lookups that previously required launching an app (Photos, Paint, a browser) are now a right‑click away.
  • Context retention: users stay in File Explorer, preserving their mental flow while performing quick transformations or lookups.
  • Discoverability: exposing AI options in the context menu puts capabilities in front of users who might not open Copilot or Photos to discover them.
The practical productivity wins are clear: social media creators can remove a stray object faster; knowledge workers can run a visual lookup on a screenshot without opening a browser; and busy users can get quick summaries for OneDrive documents (where Copilot actions are available) without opening Word. Multiple outlets reporting on the feature emphasize this “in‑flow” benefit. (windowscentral.com)

Risks, caveats and unanswered questions​

While the feature is promising, several important issues deserve scrutiny.

1) Local vs cloud processing and privacy​

Microsoft’s hybrid approach — local on Copilot+ hardware vs cloud fallback — is not clearly documented per action. That matters for:
  • Data residency and exposure: if an image or document is uploaded to cloud endpoints for processing, that changes the privacy calculus for sensitive content.
  • Consent and transparency: the Settings surface listing recent generative AI activity is a start, but it’s not a full audit trail or proof of where the data was processed.
Until Microsoft publishes per‑action locality guarantees and enterprise‑grade logs, organizations should assume some actions may use cloud services and treat the functionality accordingly.

2) Enterprise manageability​

Visibility alone isn’t enough for IT:
  • Enterprises will need explicit Group Policy/MDM controls to enforce whether devices may use on‑device models or cloud endpoints for generative AI.
  • Admins will require robust telemetry and auditability for compliance scenarios.
The Settings page is a necessary transparency step, but further policy controls are essential for corporate deployment.

3) Feature gating, inconsistency and support burden​

Canary builds are experimental and heavily server‑gated. That leads to:
  • Inconsistent experiences across machines and rings.
  • Increased support complexity for organizations testing or piloting the features.
  • Potential regressions (installation rollbacks, driver issues) that can disrupt workflows.

4) Intellectual property and content integrity​

AI edits like generative erase or background removal may modify image content in non‑obvious ways. Users should be aware that automated edits can alter meaning or remove context; for legal, journalistic or forensic use cases, preserving the original file should be a default. Built‑in “save as” workflows and easy access to originals will be important safeguards.
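One low-tech way to make "preserve the original" the default, assuming you script your own safeguard rather than rely on the apps, is sketched below in Python: copy the file into a sibling archive folder and mark the copy read-only before any automated edit. The folder name and workflow are arbitrary choices for this sketch, not a Windows feature.

```python
import os
import shutil
import stat
from pathlib import Path


def preserve_original(image_path: str, archive_dir_name: str = "_originals") -> Path:
    """Copy the file into a sibling archive folder and mark the copy read-only.

    The archive folder name is an arbitrary choice for this sketch; nothing here
    is part of the File Explorer AI actions feature itself.
    """
    source = Path(image_path)
    archive_dir = source.parent / archive_dir_name
    archive_dir.mkdir(exist_ok=True)
    backup = archive_dir / source.name
    shutil.copy2(source, backup)       # copy2 keeps timestamps and basic metadata
    os.chmod(backup, stat.S_IREAD)     # read-only, to discourage accidental overwrites
    return backup


# Example: preserve_original(r"C:\Photos\evidence.jpg") before running a generative erase.
```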

Recommendations for power users and IT teams​

  • Treat Canary builds as experimental: do not deploy on production devices without backups and rollback plans. Use virtual machines or test devices for hands‑on evaluation.
  • Review the new Settings → Privacy & security → Text and image generation page regularly to understand which apps have invoked generative AI and to toggle app access where necessary.
  • For organizations, request explicit MDM/Group Policy controls from Microsoft for:
      • Disabling cloud processing for generative actions.
      • Controlling which users/groups have access to AI actions in File Explorer.
      • Enabling audit logging for generative AI invocations.
  • If you need the Notification Center seconds clock but don’t see the toggle, wait for the staged rollout or use ViVeTool only if you understand the risks; community guides document the feature flags and ViVeTool commands that have exposed the toggle in earlier Insider tests. Use ViVeTool with care. (pureinfotech.com)
  • Keep Photos and Paint updated from the Microsoft Store — the Explorer quick actions call those apps, and behavior depends on installed app versions.
  • For sensitive images, prefer manual edits in a controlled editor rather than automated cloud‑backed operations until locality and privacy guarantees are clarified.

How this fits into Microsoft’s larger AI strategy​

This shell‑level integration is consistent with Microsoft’s approach of making AI a first‑class OS capability rather than an isolated app feature. The pattern is:
  • Surface AI where users are already working (File Explorer, OneDrive, taskbar/Activity Center).
  • Provide visibility and initial controls in Settings.
  • Gate heavier document and enterprise features behind Copilot/Microsoft 365 entitlements and staged rollouts.
The immediate aim is to reduce friction for tiny tasks; the longer game is system‑level APIs and governance that let admins and developers build consistent experiences. Microsoft’s Flight Hub and Insider Blog remain the places to watch for when these experiments graduate to broader channels. (blogs.windows.com)

Final verdict — strengths and potential risks​

  • Strengths
      • Productivity gains: one‑click edits and visual search remove friction from many everyday tasks.
      • Discoverability: surfacing AI inside the familiar right‑click menu exposes capabilities to users who might not open Copilot or Photos on their own.
      • Platform consistency: reusing Photos, Paint and Bing Visual Search leverages existing investments rather than duplicating efforts. (windowscentral.com)
  • Risks
      • Privacy and data locality ambiguity: the mixed local/cloud model needs clearer documentation and explicit enterprise controls.
      • Canary instability and support overhead: early test builds can introduce regressions and inconsistent behavior across devices.
      • Licensing and fragmentation: Copilot‑gated document actions create different experiences for commercial and consumer users, complicating IT planning.

Practical checklist for testing Build 27938 (or equivalent preview)​

  • Run the build on a non‑critical test device with a fresh backup image.
  • Confirm whether AI actions appears when right‑clicking a .jpg/.png in File Explorer.
  • Test each image action (Visual Search, Blur Background, Erase Objects, Remove Background) and note whether processing appears local (fast, offline) or requires cloud connectivity; a rough traffic‑measurement sketch follows this checklist.
  • Open Settings → Privacy & security → Text and image generation and observe recent app activity.
  • Check Settings → Time & language → Date & time for Show time in the Notification Center and test collapsed/expanded flyouts.
  • Monitor Device Manager, Task Manager, and Event Viewer for regressions (driver warnings, freezes, error codes).
  • If evaluating Copilot/OneDrive document actions, test with a Microsoft 365 account that has Copilot entitlements and files stored in OneDrive.
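For the local-versus-cloud check above, one crude signal is whether system-wide network traffic jumps while the action runs. The Python sketch below (using the third-party psutil package) snapshots byte counters around a manual test window; it cannot attribute traffic to a specific action, so treat any delta as a hint rather than proof.

```python
import time

import psutil  # third-party: pip install psutil


def measure_traffic(window_seconds: int = 30) -> tuple[int, int]:
    """Snapshot system-wide sent/received bytes around a manual test window.

    Start the script, trigger the AI action from File Explorer, then wait for the
    window to elapse. A large delta hints at cloud processing but is not proof,
    because every other app shares the same counters.
    """
    before = psutil.net_io_counters()
    print(f"Run the AI action now; measuring for {window_seconds} seconds...")
    time.sleep(window_seconds)
    after = psutil.net_io_counters()
    sent = after.bytes_sent - before.bytes_sent
    recv = after.bytes_recv - before.bytes_recv
    print(f"Bytes sent: {sent:,}  Bytes received: {recv:,}")
    return sent, recv


if __name__ == "__main__":
    measure_traffic()
```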

Conclusion​

Embedding AI into File Explorer marks a pragmatic and user‑centered step: Microsoft is not forcing a new app, it’s adding micro‑workflows where users already manage files. For everyday users and content creators, the result will be faster edits and easier visual lookups. For enterprises and privacy‑sensitive users, the move raises clear questions about where processing happens and how to govern access.
The reported Build 27938 is a Canary‑channel experiment — promising, incremental, and still subject to change. Treat it as a preview of what could become a standard part of Windows’ productivity fabric, but expect additional policy controls, clearer locality guarantees, and staged rollouts before this functionality lands broadly and safely in production channels. (windowscentral.com) (theverge.com)

Source: Cyber Press Microsoft Adds AI-Powered Actions to Windows File Explorer
 

Microsoft is testing a set of context‑aware AI editing tools that sit directly in File Explorer, letting you right‑click an image or document and invoke tasks such as Bing Visual Search, background blur/removal, generative object erase, and document summarization — and there are documented ways to try them today if you’re willing to run Insider builds or use community tools to unlock the hidden flags. (theverge.com) (windowscentral.com)

[Image: a translucent holographic display shows an AI photo-editing UI with a portrait image.]

Overview

Microsoft’s newest experiments move small, repeatable editing tasks out of full editors and into the shell itself. The promise is simple: shave seconds (or minutes, cumulatively) from everyday workflows by turning File Explorer into a launchpad for AI actions. Early test builds expose an “AI actions” submenu when you right‑click supported files; options change depending on whether the file is an image or a document. The image actions tested so far include Bing Visual Search, Blur Background, Erase Objects, and Remove Background (which hooks into Paint), while document actions focus on summarization for supported Office and text formats. (theverge.com) (windowscentral.com)
These features are still experimental and being rolled out gradually to Insiders on different rings and channels; Microsoft is clearly using staged gating so not every Insider sees the same surface at the same time. The rollout strategy also ties into Microsoft’s broader Copilot/Copilot+ strategy — some higher‑end features are optimized for Copilot+ hardware with NPUs (neural processing units) and may remain gated on certain devices or subscriptions for a period. (windowscentral.com)

Background: why this matters​

File Explorer is where most users start and finish file work. By surfacing lightweight AI actions directly in the context menu, Microsoft reduces context switching: instead of opening an app, waiting for it to load, and hunting for a tool, a single right‑click could get you the edit and send you back to whatever you were doing.
  • Faster micro‑edits: Tasks like blurring a background or removing a photobomber are often tiny jobs that still cost time. AI Actions aim to make those instantaneous.
  • Discoverability: Embedding the options in the context menu makes these capabilities visible to mainstream users who may otherwise never open advanced editors.
  • Tighter integration: Actions hand off to native apps (Photos, Paint) and Copilot where necessary, so edits can leverage existing, familiar tooling while being orchestrated from the shell. (theverge.com)
At the same time, the rollout highlights a larger trend: Microsoft is moving generative and on‑device AI deeper into the operating system. That trend comes with tradeoffs — device hardware requirements, subscription gating for some productivity scenarios, and new privacy controls to manage generative AI access.

What’s in the early AI Actions menu​

The early builds expose four image actions and a document summarization capability. Behavior and availability vary by build and channel, but the tested set includes:
  • Bing Visual Search — Use an image as the search query to identify objects, products, plants, landmarks, and visually similar images on the web. This uses Bing’s visual search pipeline and is surfaced as a right‑click option. (theverge.com)
  • Blur Background — Launches the Photos app with the image preloaded and applies a portrait‑style background blur, with a slider to adjust intensity and brush tools for touch‑ups. (windowscentral.com)
  • Erase Objects (Generative Erase) — Invokes Photos’ generative inpainting to remove unwanted people or objects; the AI fills and blends the removed area. This is similar in concept to Google Photos’ Magic Eraser and Adobe’s tools.
  • Remove Background — Opens Paint’s background removal pipeline to produce a subject cutout with one click; useful when you need a quick transparent background or to paste the subject elsewhere. (theverge.com)
  • Summarize (documents) — For supported Office files and text documents, a context menu option will ask Copilot (or an on‑device summarization model when available) to create a quick summary or list of key points; initial availability is tied to Microsoft 365 Copilot licensing for some scenarios. (theverge.com)
Supported image formats reported in early testing are JPG, JPEG and PNG; Microsoft has indicated plans to expand support to additional file types in future flights. (theverge.com)

How Microsoft says the system works (and the settings you should know)​

Microsoft’s design routes AI action requests to the app best suited to complete the task (Photos or Paint), but the user experience begins in File Explorer. When you select “AI actions,” Windows decides which actions are relevant to the selected file type and displays them inline in the context menu.
Privacy and control are first‑class concerns in these flows. Windows now exposes a Text and image generation page under Settings > Privacy & security where administrators and users can:
  • See which apps have recently requested access to Windows’ generative AI capabilities.
  • Allow or deny apps the ability to use on‑device text and image generation.
  • Use Group Policy or registry options to control the feature centrally in enterprise environments. (downloadsource.net) (elevenforum.com)
This visibility is important because the AI actions use either local on‑device models (when available) or cloud services depending on device capability. If your device has a dedicated NPU and the feature runs locally, the image data need not leave the PC for processing — a privacy advantage in many cases. Microsoft documents and community threads confirm the Settings path and show the “recent activity” UI is present in current Preview builds. (elevenforum.com)
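For scripted auditing of those per-app permissions, community write-ups suggest the Settings toggle is backed by a capability consent store in the registry. The Python sketch below reads that store with winreg; the generativeAI key name and the "Value" data format are assumptions taken from community reports and may differ or change between builds.

```python
import winreg

# Assumed consent-store location for "Text and image generation", taken from
# community write-ups; confirm the path on your own build before relying on it.
CONSENT_STORE = (
    r"Software\Microsoft\Windows\CurrentVersion"
    r"\CapabilityAccessManager\ConsentStore\generativeAI"
)


def dump_generative_ai_consent() -> None:
    """List the global toggle and any per-app subkeys recorded under the consent store."""
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, CONSENT_STORE) as key:
            try:
                value, _ = winreg.QueryValueEx(key, "Value")
                print(f"Global setting: {value}")  # reportedly "Allow" or "Deny"
            except FileNotFoundError:
                print("Global setting: not present")
            index = 0
            while True:
                try:
                    app = winreg.EnumKey(key, index)
                except OSError:  # no more subkeys
                    break
                print(f"Per-app entry: {app}")
                index += 1
    except FileNotFoundError:
        print("Consent store key not found on this build (the path is an assumption).")


if __name__ == "__main__":
    dump_generative_ai_consent()
```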

Who gets these features now (requirements and gating)​

Availability is intentionally limited early on — here’s the practical picture from testing and Microsoft’s staged rollouts:
  • Windows Insider Program: the quickest path is to join the Windows Insider Program (Dev or Beta channels) and install the latest preview builds where the experiments are active. Reported enabling builds include Dev builds in the 26200.xxx series and Canary builds such as 27938 for server‑gated tests. (windowscentral.com)
  • Copilot+ PC hardware: certain advanced AI features (and often the best performance) target Copilot+ PCs — machines with NPUs such as Snapdragon, Intel Core Ultra, or AMD Ryzen AI silicon. Some capabilities run locally on these NPUs for speed and privacy; non‑Copilot devices may see a subset or require cloud processing.
  • Microsoft 365 Copilot: document summarization for business Office files may require a Microsoft 365 Copilot subscription, particularly in early releases when commercial features are prioritized. Consumer availability is likely to follow. (theverge.com)
In short: Insiders on modern hardware see the most features earliest; other users will get features in phases.

How to try AI Actions today (official and community options)​

There are two realistic ways to try these features early: enroll in the Windows Insider Program and wait for the feature to hit your ring, or use community tooling to flip the underlying feature flags (advanced users only).

1. Official path (Insider Program)​

  • Open Settings > Windows Update > Windows Insider Program.
  • Link an account that’s registered for the Insider Program and choose the Dev or Beta channel as directed by current flight notes.
  • Update Windows to the build where AI Actions are enabled (Dev builds like 26200.5603 or later, or Canary builds where Microsoft is experimenting). Reboot and check File Explorer. (windowscentral.com)
This method is safest because you use Microsoft’s update channels and receive subsequent fixes from Microsoft directly.

2. Community method (ViveTool — advanced and risky)​

If the server‑gated rollout hasn’t given you the menu, community developers have identified feature‑flag IDs that can be toggled with ViveTool, an open‑source utility widely used by testers to enable hidden features in preview builds. The steps reported across community guides are:
  • Download ViveTool from its official repository and extract to a folder.
  • Open Command Prompt (Admin) and change directory to ViveTool’s folder.
  • Run the command:
    vivetool /enable /id:54792954,55345819,48433719
  • Restart your PC and check File Explorer for the AI actions entry. (guidingtech.com) (windowsforum.com)
Important caveats:
  • Using ViveTool can expose features that are not fully tested, and Microsoft may not support configurations that flip server‑gated flags manually.
  • Future cumulative updates may conflict with manually toggled flags and could cause instability.
  • Enabling flagged features bypasses Microsoft’s staged testing and places you in a “trial” category — use a spare device or create a full image backup before proceeding. (windowsforum.com)

Verified technical specifics and cross‑checks​

To avoid repeating early reporting errors, the following claims have been cross‑checked across independent sources:
  • The File Explorer “AI actions” menu appears in Canary/Dev Insider builds and ties to builds in the 26200+ family and server‑gated Canary builds like 27938. This has been reported by multiple outlets and community logs. (windowscentral.com)
  • Supported image formats at test time: JPG, JPEG, PNG. Several preview reports and screenshots show the actions enabled only for these raster formats. Expect broader format support later. (theverge.com) (windowscentral.com)
  • Privacy control location: Settings > Privacy & security > Text and image generation. This control and the recent activity view are present in current preview builds. Community posts and tutorials demonstrate the UI and registry/Group Policy options for administrators. (downloadsource.net) (elevenforum.com)
  • ViveTool IDs used in community enablement guides (54792954, 55345819, 48433719) appear in multiple community tutorials and technical how‑tos; those guides show the exact commands used to make the AI actions menu appear. These are not official Microsoft recommendations. (guidingtech.com) (windowsforum.com)
If Microsoft changes the feature flags or the UI, community IDs and steps can also change; treat the ViveTool route as a snapshot of the current community findings rather than a long‑term guarantee. (windowscentral.com)

Strengths: why this is a meaningful UX step​

  • Time savings for quick edits: The biggest practical win is eliminating the friction of small edits — a blur, a background removal, or a quick erase — where the full editor is overkill.
  • Lower barrier to creative tools: Casual users who never open Photos or Paint for edits will discover these tools where they already work — in the file manager.
  • On‑device processing potential: When running on Copilot+ NPUs, edits happen locally, reducing latency and improving privacy compared with cloud‑only solutions.
  • Consolidated privacy controls: The Text and image generation page centralizes permissions so admins and users can audit which apps access generative models. That transparency is a welcome design choice. (elevenforum.com)

Risks and limitations: what to watch out for​

  • Fragmentation by hardware and subscription: Because some features are optimized for Copilot+ hardware and some document tools are tied to Microsoft 365 Copilot, early availability will be uneven. That fragmentation risks confusing ordinary users who expect parity across devices.
  • Potential for instability when using ViveTool: Force‑enabling staged features bypasses Microsoft’s rollout safety nets and can introduce compatibility issues. It’s a power‑user tool, not a recommendation for general audiences. (windowsforum.com)
  • Privacy surface area: Although on‑device processing reduces cloud exposure, some actions still may use cloud models depending on device capability and Microsoft’s server gating. Users should check Settings > Privacy & security > Text and image generation to monitor which apps requested AI access. (downloadsource.net)
  • False expectations on quality: Generative erase and background removal are impressive for routine cases but can struggle on complex scenes — multiple overlapping subjects, patterned backgrounds, or reflections may produce artifacts that require a traditional editor to fix. Community tests show mixed results in challenging images.
  • Abuse and misinformation: Easier image editing lowers the bar for image manipulation in casual workflows; organizations and users should be mindful of the potential for misused imagery and introduce verification workflows where accuracy matters. This is a societal risk that accompanies all accessible image editing tools.

Practical advice and recommendations​

  • Back up your system before toggling deep preview flags or using ViveTool. Prefer a disk image so you can revert quickly if an update doesn’t behave as expected. (windowsforum.com)
  • Use the Settings > Privacy & security > Text and image generation page to audit which apps are calling generative models. Turn off permissions for apps you do not trust or do not use. (downloadsource.net)
  • If you prefer stability, wait for Microsoft’s official gradual rollout rather than forcing flags. Official channels will include incremental fixes and less risk of update breakage. (windowscentral.com)
  • For advanced editing needs (complex fills, high‑fidelity compositing), continue using full editors like Photoshop or Affinity Photo. The File Explorer actions are best described as “quick fixes,” not replacements for pro tools.
  • If you test the features, provide feedback via the Feedback Hub so Microsoft sees real‑world usage patterns and can prioritize improvements and broader availability. Insider feedback has historically shaped final implementations.

Looking ahead​

AI Actions are part of a much larger effort to make Windows 11 more contextually intelligent: Click‑to‑Do, Copilot Vision, Relight and Super Resolution in Photos, and other enhancements are following the same playbook of surfacing AI where users already work. Expect Microsoft to expand supported file types, refine the model quality, and broaden device compatibility in future updates once early feedback stabilizes performance and privacy posture. (theverge.com)
Two specific trends to watch:
  • Broader format and app support — document summarization for consumer Office users and expanded image format support will be important for adoption.
  • Local vs. cloud routing — how Microsoft decides which operations run on device versus in the cloud will shape the privacy and latency story for users worldwide.

Conclusion​

File Explorer’s new AI Actions are a practical experiment with clear, immediate benefits: quick edits and smarter context actions without the overhead of app switching. For testers and productivity tinkerers, the Insider path (or the community ViveTool method for advanced users) provides early access today. However, the rollout is intentionally cautious — gated by hardware, channels, and subscriptions — and there are real trade‑offs around fragmentation, privacy, and stability when enabling unfinished features.
The net: these right‑click AI shortcuts are a credible productivity booster when used with reasonable precautions. For most users, the recommended path is to monitor official Insiders and wait for Microsoft’s broad, supported rollout — but for those who want to tinker and accept the risks, the early unlock methods are already documented and in active use by the Windows community. (theverge.com) (guidingtech.com) (elevenforum.com)

Source: GB News Windows 11 is adding new photo editing tricks with AI, and there's a way to unlock them early on your PC
 

Microsoft is quietly moving generative AI from apps into the very heart of Windows by adding an AI Actions submenu to File Explorer — a right‑click surface that lets you run visual search or apply quick, model‑driven image edits without opening a separate editor. The capability, visible in the Windows 11 Insider Preview Canary builds reported as Build 27938, initially supports four image-focused actions — Bing Visual Search, Blur Background, Erase Objects, and Remove Background — and is paired with a new Settings surface that shows which third‑party apps have recently used Windows‑provided generative AI models. (blogs.windows.com)

[Image: a transparent monitor shows floating AI tools for image editing and text/image generation.]

Background

File Explorer has been one of Windows’ most stable and central interfaces for decades. Microsoft’s recent strategy has been to surface capabilities where users already interact with files instead of forcing context switches into separate apps. The AI Actions experiment continues that trajectory: rather than opening Photos or Paint and hunting through menus, a user can right‑click a JPG/PNG in File Explorer and pick an AI‑driven task directly from the context menu. Early documentation and blog posts from Microsoft emphasize staying in your flow by turning routine edits and lookups into one‑click micro‑workflows. (blogs.windows.com)
This shift fits into a broader pattern of embedding AI across Windows — from Copilot integrations to on‑device model surfaces on Copilot+ PCs. Microsoft is testing these actions with Windows Insiders and using staged rollouts and server‑side gating, so visibility and behavior will vary between devices and Insider channels. (theverge.com)

What Microsoft shipped (the practical list)​

Image actions available in the preview​

  • Bing Visual Search — Use the selected image as a query to find visually similar images, identify landmarks, plants, or products, and discover web pages that contain the same image. (blogs.windows.com)
  • Blur Background — Launches the Photos app with an automated subject/background separation and a one‑click blur. You can adjust intensity and refine the effect with a brush. (blogs.windows.com)
  • Erase Objects (Generative Erase) — Opens the Photos app’s generative edit flow to select and remove distracting elements; the model fills and blends the removed area. (blogs.windows.com)
  • Remove Background — Routes the image to Paint’s automatic background‑removal pipeline to produce a subject cutout in one click. (blogs.windows.com)
These actions were announced as supporting .jpg, .jpeg, and .png at introduction. Microsoft framed these as an initial, image‑focused set with plans to add document actions (summarize, generate FAQ, etc.) for Microsoft 365 files in future rollouts. (blogs.windows.com)

Settings and visibility controls​

A companion Settings page — Settings > Privacy & security > Text & image generation — exposes which third‑party apps recently used Windows‑provided generative AI models and provides per‑app toggles to manage access. This is Microsoft’s first visible attempt to put governance and transparency directly into the OS as generative features expand. (blogs.windows.com)

How AI Actions are implemented (technical overview)​

AI Actions are implemented as shell hooks — context‑menu launch points inside File Explorer that either:
  • Hand a file to a first‑party app (Photos, Paint) with an edit already staged, or
  • Invoke a platform API to run a quick model operation (local or cloud) and return a small preview or result.
Because the actions reuse existing apps and services (Photos, Paint, Bing Visual Search), the quick flows are effectively orchestration layers that make existing AI capabilities discoverable in the file manager instead of buried inside apps. That design reduces duplication but also means the quality and capability of each action depend on the underlying app and which models it uses.
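To make the shell-hook idea concrete, the sketch below uses the long-documented registry mechanism for adding a per-file-type context-menu verb. The packaged AI actions entry almost certainly uses a modern Explorer-integrated handler instead, so read this as an illustration of how a shell launch point can hand a file to another program, not as a recreation of the feature.

```python
import winreg


def register_context_verb(extension: str, verb: str, label: str, command: str) -> None:
    """Register a classic context-menu verb for a file extension (current user only).

    This uses the long-standing SystemFileAssociations mechanism; the packaged
    AI actions entry in Insider builds is implemented differently, so this only
    illustrates the general idea of a shell launch point handing a file to a program.
    """
    base = rf"Software\Classes\SystemFileAssociations\{extension}\shell\{verb}"
    # Default value of the verb key is the menu label shown in the classic context menu.
    winreg.SetValue(winreg.HKEY_CURRENT_USER, base, winreg.REG_SZ, label)
    # Default value of the "command" subkey is the command line; "%1" expands to the file.
    winreg.SetValue(winreg.HKEY_CURRENT_USER, base + r"\command", winreg.REG_SZ, command)


# Example: add an "Open in Paint (demo)" verb for .png files; delete the keys to undo.
# register_context_verb(".png", "OpenInPaintDemo", "Open in Paint (demo)", r'mspaint.exe "%1"')
```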
Local vs cloud execution — the tricky part
  • Microsoft’s public notes and hands‑on reporting indicate the platform uses a hybrid model: some workloads can run on‑device (notably on Copilot+ hardware with an NPU), while others may fall back to cloud processing. Microsoft has not published a full per‑action locality matrix, so whether an action runs locally or sends data to Microsoft systems will depend on hardware, software versions, user settings, and licensing. This ambiguity is important for expectations around latency, privacy, and offline operation. (blogs.windows.com)

Why this matters — productivity and UX benefits​

  • Fewer context switches. One‑click edits from File Explorer mean small jobs no longer require launching editors and navigating toolsets.
  • Better discoverability. Surfacing AI edits in a familiar right‑click menu reduces the learning curve for users who rarely open advanced photo tools.
  • Faster micro‑workflows. Tasks like background removal or quick object erases are often minute‑scale chores; bringing them to the shell can save many small chunks of time cumulatively.
  • Unified entry point. By aggregating visual search and in‑app edits under one menu, Microsoft creates a single, discoverable place to get answers or do quick fixes.
Industry hands‑on coverage backs these claims: reviewers who tested early builds report that the experience does accelerate casual image edits and discovery workflows, especially for users who create social posts or presentation assets frequently. (theverge.com)

Risks, caveats, and limitations​

Channel instability and staged rollouts​

These features are coming through the Canary/Dev/Beta Insider channels and are heavily server‑gated. That means:
  • A build number (e.g., 27938) is community‑reported for specific experiments but may not be consistently visible on all devices running the same build. Expect A/B tests, feature‑flag toggles, and frequent UX changes during previews.

Technical limitations​

  • File format support is limited: at introduction, only JPG/JPEG and PNG are supported for AI actions. RAW, PSD, TIFF, and other pro formats are not supported in Explorer quick flows, which keeps professional workflows in full editors for now. (blogs.windows.com)
  • Model quality and artifacts: generative erase and background removal may introduce blending errors, texture mismatches, or artifacts — acceptable for casual users but potentially problematic for production images. Test outputs before relying on them for client work.

Privacy and data flow concerns​

  • Because some actions may use cloud processing or central APIs, organizations should treat these capabilities as potential data egress vectors. The new Settings page provides visibility into which apps used Windows‑provided generative models, but it does not (yet) replace enterprise governance tools for controlling network egress or model telemetry. Administrators will want to combine the OS toggles with MDM/Group Policy, network controls, and policy guidance. (blogs.windows.com)

Licensing and hardware gating​

  • Document‑level AI actions (summarize, create FAQ) are being rolled out with Microsoft 365/Copilot licensing in mind; some features will initially target commercial Copilot tenants before consumer availability. Some enhanced on‑device execution is optimized for Copilot+ hardware with NPUs and may be gated on those devices. In short: the best local performance and the broadest document features likely require specific hardware and licensing. (blogs.windows.com)

Enterprise and IT implications — what to do now​

Organizations should approach File Explorer AI Actions deliberately and treat them like a new OS feature that affects user behavior, privacy, and support:
  • Test in a controlled environment: evaluate UI behavior, performance, and artifacts on representative hardware.
  • Review the new Settings page (Text & Image Generation) and use per‑app toggles for sensitive machines. (blogs.windows.com)
  • Use MDM/Group Policy and network controls to limit cloud model access where required. Consider blocking unknown or risky endpoints and applying conditional access controls for Copilot/Microsoft 365 services.
  • Update security baselines and remediation playbooks to account for new data egress vectors (images, documents, metadata).
  • Educate end users on expected limitations (artifacts, file type support) and how to verify outputs before publishing or external distribution.
System administrators should also watch for policy updates from Microsoft that add granular controls (MDM/Intune, Group Policy) for AI platform features—those controls are likely to arrive once the features move beyond Canary/Dev into broader channels.

For power users: how to try it and a caution​

If you want to try AI Actions today:
  • Join the Windows Insider Program (Dev, Beta, or Canary depending on the timing) and update to a build that includes AI Actions. (blogs.windows.com)
  • Right‑click a supported image (.jpg/.jpeg/.png) in File Explorer and look for the AI Actions submenu.
  • Use the Settings page (Privacy & security → Text & image generation) to review apps that used Windows AI models.
A commonly published workaround is to use third‑party tools like ViveTool to flip feature flags manually if the menu does not appear; that approach is unsupported and can produce unstable behavior. Use such tools only in test VMs and not on production or corporate devices. Guiding Tech and other hands‑on guides outline the steps and risks of ViveTool for early enablement. (guidingtech.com)

Broader ecosystem view and competitive context​

Microsoft’s move to put AI into File Explorer is part of a broader arms race among OS vendors to make AI feel native rather than siloed. Google, Apple, and OEMs are also quickening efforts to surface AI across system experiences (search, photos, device settings). For Microsoft, the advantage is scale: File Explorer is a near‑universal entry point for Windows users, which makes it a logical lever for broader adoption of OS‑level AI affordances. But that advantage comes with responsibility — balancing usefulness, privacy, and enterprise manageability. (techradar.com)

Critical analysis — strengths and notable risks​

Strengths​

  • High discoverability: Integrating AI into the right‑click menu exposes capabilities to users who would never open dedicated apps. That democratizes simple edits.
  • Flow preservation: The ergonomics are strong — File Explorer is the right place to put lightweight file operations, and the design reduces clicks and waiting.
  • Platform leverage: By orchestrating Photos, Paint, and Bing Visual Search, Microsoft avoids reinventing wheels and reuses mature tooling while adding orchestration logic in the shell.

Risks and blind spots​

  • Ambiguous data flow: Microsoft has not published a detailed per‑action locality guarantee. Without a clear guarantee of on‑device processing, organizations must assume some actions may send data to cloud endpoints under certain conditions. That uncertainty complicates risk assessments for regulated data. (blogs.windows.com)
  • Feature gating fragmentation: Hardware and licensing gates (Copilot+, Microsoft 365 Copilot) create a two‑tier experience that may frustrate users who see features in marketing but cannot access them on their machines.
  • Support surface expansion: Adding AI to the shell increases the attack surface and support complexity. Corrupted thumbnails, erroneous edits, or unexpected behavior during Canary testing can create real helpdesk tickets.
  • Potential for misuse: Easy tools for removing objects or altering images lower the barrier for image manipulation and misinformation in contexts where provenance matters. Organizations that require traceable, auditable images should treat edited images with caution.
Whenever Microsoft’s public statements are silent or ambiguous (for example, which specific actions always run locally), treat the locality claim as unverified until Microsoft clarifies the technical guarantees. Users and admins should flag those unknowns in risk assessments.

Practical recommendations (short checklist)​

  • For home users:
      • Try AI Actions on non‑critical images, and inspect results before sharing.
      • Use the Settings page to audit which apps are using Windows models. (blogs.windows.com)
  • For IT and security teams:
      • Evaluate builds in a lab before rolling to users.
      • Combine the Settings per‑app toggles with network and MDM controls.
      • Update security guidance to account for image/document edits originating from File Explorer.
      • Communicate to users how to verify edits and preserve originals (keep backups).

What to watch next​

  • Expansion to more file types (RAW, PSD) or deeper, non‑destructive workflows remains a likely next step, but will require architecture changes to preserve metadata and histories.
  • New enterprise controls (Intune/Group Policy) that allow admins to block or audit AI Actions will be crucial for broader adoption in regulated environments.
  • Clearer documentation from Microsoft about local vs cloud model execution per action will be necessary to remove ambiguity from privacy and compliance analyses. (blogs.windows.com)

Microsoft’s File Explorer experiment is an important signal: the company intends to make AI a first‑class ingredient of the OS rather than an optional bolt‑on. For everyday users, the immediate gains are obvious — faster micro‑edits and inline visual search. For IT teams and privacy‑sensitive organizations, the change raises new questions about data flows, licensing gates, and manageability. The balance between convenience and control will determine whether AI Actions become a universally welcome productivity boost or a feature that enterprises choose to disable until governance catches up. (windowscentral.com)

Source: Red Hot Cyber Windows 11: Microsoft Revamps File Explorer with Artificial Intelligence
 
