Microsoft's newest Canary‑channel experiment folds small but consequential AI workflows directly into the Windows 11 shell: a right‑click “AI actions” submenu in File Explorer surfaces visual search and one‑click image edits. Alongside it come a returning seconds display on the Notification Center clock and a new Settings surface that shows which apps recently used Windows‑provided generative AI models.

[Image: Windows desktop with translucent AI actions panel and Bing Visual Search overlay.]

Background

Windows has been quietly reframing where intelligence belongs in the OS: not just in a single Copilot app, but as micro‑workflows embedded where users already interact with files. File Explorer is the logical place for that shift, since it is the daily touchpoint for discovering, organizing, and preparing content. The latest Insider experimentation ties together three threads: discoverability (easy access to AI tools), convenience (fewer app switches for routine edits), and visibility (insight for users and administrators into which apps are invoking generative AI).
Microsoft’s Canary‑channel preview commonly serves as the earliest visible surface for such experiments, but features are often server‑gated and staged; not every Insider on the same numeric build will necessarily see the same features. Community reporting has associated the current cluster of experiments with Windows 11 Insider Preview Build 27938, but that build‑to‑feature mapping is subject to change until Microsoft confirms it in official Flight Hub notes. Treat the build number as community‑reported for now.

What shipped in this Canary flight — an overview

At a glance, the visible pieces in this Canary experiment are:
  • A new AI actions entry in File Explorer’s right‑click (context) menu for supported image files that exposes four image‑focused tools: Bing Visual Search, Blur Background, Erase Objects, and Remove Background. These flows either stage edits in Photos or Paint, or launch a visual lookup using Bing.
  • A reintroduced larger Notification Center clock that shows seconds, exposed via Settings → Time & language → Date & time → Show time in the Notification Center. This is an opt‑in toggle designed to restore a small but frequently requested convenience from older Windows releases.
  • A new Text and image generation (or generative AI) privacy surface under Settings → Privacy & security that lists third‑party apps which recently used Windows‑provided generative AI models, offering per‑app toggles and initial transparency controls for administrators and users.
Together these items show Microsoft’s twofold strategy: make AI actionable at the file level, and make generative AI visible and controllable at the OS level.

Deep dive: AI actions in File Explorer

What the new right‑click menu offers

When visible on a device running the relevant Insider flight, right‑clicking a supported image file (.jpg, .jpeg, .png) may reveal an AI actions submenu that groups several one‑click flows:
  • Bing Visual Search — use the image itself as the query to find visually similar images, products, landmarks, or extract information from the scene.
  • Blur Background — invokes Photos and applies an automatic portrait‑style background blur, with sliders or brush tools to fine‑tune the result.
  • Erase Objects — calls Photos’ generative inpainting/erase capability so you can select unwanted elements and remove them (the AI fills the removed area).
  • Remove Background — sends the image to Paint’s background removal pipeline to produce a subject cutout in one click.
The menu is context‑aware: the offered actions change depending on file type (images vs. documents), and some actions hand the file off to existing first‑party apps (Photos, Paint) with the operation already staged so you return quickly to your workflow.

Supported file formats and limits

Initial testing reports indicate the Explorer quick actions work with common raster image formats — .jpg, .jpeg, and .png — while professional or RAW formats (PSD, TIFF, camera RAW) are not reliably supported in these quick flows. Document‑level actions (summarize, create an FAQ) are on the roadmap and will target Microsoft 365 files first, with some features expected to be gated behind Copilot/Microsoft 365 licensing during initial rollouts.
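As a rough mental model of this gating, the context awareness amounts to mapping file extensions to eligible actions and host apps. The sketch below is purely illustrative, inferred from the behavior community testers describe, and is not Microsoft's actual shell integration:

```python
from pathlib import Path

# Illustrative only: the action-to-host mapping mirrors community reports,
# not Microsoft's real shell extension code.
IMAGE_ACTIONS = {
    "Bing Visual Search": "bing",   # web lookup, image as the query
    "Blur Background": "photos",    # staged in the Photos app
    "Erase Objects": "photos",      # Photos' generative erase
    "Remove Background": "paint",   # Paint's background removal
}
SUPPORTED_IMAGE_EXTS = {".jpg", ".jpeg", ".png"}  # per early testing reports

def actions_for(path: str) -> dict:
    """Return the actions the AI actions submenu would offer for this file."""
    ext = Path(path).suffix.lower()
    return IMAGE_ACTIONS if ext in SUPPORTED_IMAGE_EXTS else {}

print(actions_for("vacation.png"))   # all four image actions
print(actions_for("raw_shot.tiff"))  # {} -> no AI actions submenu offered
```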

How the flows execute (local vs cloud)

Microsoft’s current implementation appears hybrid: some actions may run locally (especially on Copilot+ hardware with NPUs), while others will fall back to cloud processing depending on device capability, installed app versions, and Microsoft’s runtime policies. Microsoft has not published a comprehensive per‑action locality guarantee for every scenario; assume hybrid behavior until explicit locality documentation is provided. This ambiguity matters for both privacy and performance expectations.

Walkthrough: each AI action examined

Bing Visual Search

Bing Visual Search treats the image as the query token. For everyday users, this offers instant research: find a product page from a screenshot, identify landmarks or plants, or discover the source of a photo without opening a browser and manually uploading the image. It’s a convenience win for content creators, shoppers, and researchers who frequently work with screenshots or downloaded images.
Caveats: Visual search results are web‑computed and will involve network requests. If your workflow revolves around sensitive or private imagery, prefer local tooling until Microsoft clarifies data handling practices for visual search.
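For readers who want to see the image‑as‑query pattern outside of Explorer, the public Bing Visual Search REST API exposes the same idea. This is a minimal sketch assuming you hold a valid subscription key; the public endpoint shown is not necessarily what Explorer calls internally, and Microsoft has announced retirement plans for the standalone Bing Search APIs:

```python
import requests  # pip install requests

# Public Bing Visual Search endpoint; Explorer's internal integration may differ.
ENDPOINT = "https://api.bing.microsoft.com/v7.0/images/visualsearch"
SUBSCRIPTION_KEY = "YOUR_KEY_HERE"  # placeholder: requires an Azure key

def visual_search(image_path: str) -> dict:
    """POST an image as the query and return the raw JSON result set."""
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    with open(image_path, "rb") as f:
        files = {"image": ("query.jpg", f)}
        resp = requests.post(ENDPOINT, headers=headers, files=files, timeout=30)
    resp.raise_for_status()
    return resp.json()

# The response's tags/actions describe visually similar images, products, etc.
results = visual_search("screenshot.png")
```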

Blur Background

The Blur Background action leverages Photos’ subject‑segmentation to apply a portrait blur. The quick flow aims to reduce the friction of creating a social‑ready image: right‑click → Blur Background → adjust → save. For users who frequently prepare images for presentations or social posts, this will shave time from repetitive edits.
Caveats: Automated segmentation can fail on complex subjects or cluttered scenes; Photos exposes sliders and brush tools, but advanced retouching still requires a full editor.
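The underlying technique is straightforward to reproduce: blur the whole frame, then composite the sharp subject back through a segmentation mask. The OpenCV sketch below is a stand‑in for Photos' pipeline, with a precomputed mask file substituting for the automatic segmentation Photos performs:

```python
import cv2           # pip install opencv-python
import numpy as np

def blur_background(image_path: str, mask_path: str, out_path: str,
                    strength: int = 31) -> None:
    """Composite a sharp subject over a blurred copy of the frame.

    mask_path: white-on-black subject mask, standing in for the automatic
    segmentation that a tool like Photos computes. strength must be odd.
    """
    img = cv2.imread(image_path)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(img, (strength, strength), 0)
    alpha = (mask.astype(np.float32) / 255.0)[..., None]  # 1 = subject, 0 = background
    out = (alpha * img + (1.0 - alpha) * blurred).astype(np.uint8)
    cv2.imwrite(out_path, out)

blur_background("portrait.jpg", "subject_mask.png", "portrait_blurred.jpg")
```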

Erase Objects (Generative Erase)

Generative erase lets users highlight or select distracting elements and have the model inpaint the background plausibly. This is functionally similar to tools such as Google Photos’ Magic Eraser or Adobe’s content‑aware fill, but surfaced as a one‑click context‑menu action to reduce context switches.
Caveats and risks: Generative inpainting sometimes produces artifacts or unrealistic fills, especially with complex textures. For critical editorial or forensic work, manual editing and review remain essential. There are also privacy considerations if the erase operation is executed in the cloud.
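A classical, non‑generative analogue shows the remove‑and‑fill flow: OpenCV's diffusion‑based inpainting fills a masked region from its surroundings. It will not hallucinate new structure the way a generative model can, which makes it a useful local fallback for simple backgrounds:

```python
import cv2  # pip install opencv-python

def erase_region(image_path: str, mask_path: str, out_path: str) -> None:
    """Remove the masked object and fill the hole from surrounding pixels.

    mask_path: white pixels mark the object to erase (8-bit, single channel).
    """
    img = cv2.imread(image_path)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    cv2.imwrite(out_path, result)

erase_region("beach.jpg", "tourist_mask.png", "beach_clean.jpg")
```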

Remove Background

This action triggers Paint’s automatic background removal pipeline to produce a subject cutout. It’s a pragmatic tool for quick collages, thumbnails, or slides when you need a transparent subject without opening a full editor.
Limitations: Automatic cutouts are convenient but imperfect; edge artifacts and missed hair strands are common failure modes that require manual cleanup.
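As an open‑source stand‑in for this kind of cutout (a third‑party model, not Paint's actual pipeline), the rembg package produces a comparable transparent‑background PNG in a few lines:

```python
# pip install rembg; third-party segmentation model, not Paint's pipeline
from rembg import remove

with open("photo.jpg", "rb") as src:
    cutout = remove(src.read())  # returns PNG bytes with an alpha channel

with open("cutout.png", "wb") as dst:
    dst.write(cutout)
```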

Privacy, visibility, and governance

Microsoft accompanies the Explorer experiments with a new Text and image generation page under Settings → Privacy & security that lists third‑party apps which recently used Windows‑provided generative AI models and provides per‑app toggles for blocking access. This is a notable step toward transparency: rather than leaving AI invocations invisible, Windows will surface recent generative activity and offer initial controls.
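Windows already tracks per‑app consent for capabilities such as camera and microphone under the registry's CapabilityAccessManager consent store, and the new surface plausibly follows the same pattern. The sketch below enumerates such a store; note that the "generativeAI" key name is an assumption drawn from community reports, not something Microsoft has confirmed:

```python
import winreg  # Windows-only standard library module

# CapabilityAccessManager\ConsentStore is the documented pattern for camera,
# microphone, etc.; the "generativeAI" capability name is an assumption.
BASE = (r"Software\Microsoft\Windows\CurrentVersion"
        r"\CapabilityAccessManager\ConsentStore\generativeAI")

def list_generative_ai_consent() -> None:
    try:
        root = winreg.OpenKey(winreg.HKEY_CURRENT_USER, BASE)
    except FileNotFoundError:
        print("Consent store not present on this build.")
        return
    i = 0
    while True:
        try:
            app = winreg.EnumKey(root, i)  # one subkey per app
        except OSError:
            break  # no more subkeys
        with winreg.OpenKey(root, app) as key:
            try:
                value, _ = winreg.QueryValueEx(key, "Value")  # "Allow" / "Deny"
            except FileNotFoundError:
                value = "(unset)"
        print(f"{app}: {value}")
        i += 1

list_generative_ai_consent()
```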
However, the current controls are visibility‑focused and represent a first step. Enterprises will likely demand:
  • Per‑action policy enforcement (not just per‑app toggles), so administrators can restrict specific capabilities such as cloud offload or document summarization.
  • Comprehensive audit trails and retention policies for AI invocations, to meet compliance and eDiscovery needs.
  • Clear documentation of data flows (which actions run locally vs. cloud, and what telemetry is captured). At present, Microsoft’s public guidance emphasizes staged rollouts and hybrid execution, not a full per‑action locality guarantee. Treat locality claims cautiously until Microsoft provides specifics.
Flagged claim: community reports tie the UI to Build 27938, but Canary flights are heavily server‑gated; the exact build‑to‑feature mapping should be verified against official Flight Hub or Windows Insider blog notes if you require absolute certainty.

Enterprise and IT implications

For IT professionals, the most consequential elements are control, auditability, and fragmentation:
  • Control: The per‑app toggle is a starting point, but enterprises will expect MDM/Group Policy controls that can restrict generative AI capabilities by action or file type. Early reporting indicates Microsoft intends to extend manageability, but the degree of granularity and timing remain unclear.
  • Licensing fragmentation: Document‑level actions (summarize, create FAQ) are expected to be gated by Microsoft 365 Copilot licensing in early rollouts, which can create different experiences across commercial and consumer machines. That complicates policy planning and user expectations in mixed‑license environments.
  • Audit and compliance: Enterprises will ask whether AI invocations are logged centrally, how long payloads are retained, and where data is stored. The current Settings surface provides initial visibility, but formal audit APIs and SIEM integrations will be necessary for broad enterprise adoption.
  • Deployment risks: Canary builds can be unstable and may introduce regressions. Test on non‑production devices, and treat any ViVeTool or unofficial flag toggles as experimental and unsupported in production.

Performance and hardware considerations

Microsoft’s approach indicates a hybrid execution strategy that can take advantage of on‑device NPUs when available (Copilot+ hardware) and default to cloud endpoints otherwise. That means:
  • Machines with dedicated AI hardware may enjoy faster, lower‑latency edits and more local processing.
  • Older machines will still access the features, but may rely on cloud processing, increasing latency and generating network traffic.
Practical advice: keep Photos and Paint updated from the Microsoft Store, because the Explorer quick actions orchestrate edits by launching or calling into those apps; mismatched app versions can cause failures or missing options.

How to try this now (Insider steps)

  • Join the Windows Insider Program and enroll the device in the Canary channel. Expect instability and staged rollouts.
  • Update Windows to the latest preview build available to your Insider ring; community reports have associated these experiments with Build 27938, but server‑side gating means the feature may not appear even on that build.
  • Right‑click a supported image (.jpg/.jpeg/.png) in File Explorer and look for AI actions in the context menu. If visible, test Bing Visual Search, Blur Background, Erase Objects, and Remove Background.
  • To enable the Notification Center clock with seconds, go to Settings → Time & language → Date & time and toggle Show time in the Notification Center if present.
  • Check Settings → Privacy & security → Text and image generation for the new per‑app visibility controls that list which apps have accessed Windows‑provided generative models.
Note: community guides and ViVeTool IDs circulate for toggling experimental features, but using those is unofficial and risky; only attempt them on test hardware.

Benefits — what’s good about this approach

  • Reduced context switching: small edits that used to require launching an app and hunting for a tool are now reachable with a single right‑click, saving time over many daily micro‑tasks.
  • Discoverability: embedding AI actions in the familiar right‑click menu surfaces capabilities to mainstream users who may not routinely open Copilot or advanced editors.
  • Leverages existing investments: Microsoft routes actions through Photos, Paint, and Bing Visual Search rather than reinventing the wheel, maintaining a consistent platform approach.

Risks and open questions

  • Privacy and data locality ambiguity: Without per‑action locality guarantees, users cannot be certain when image data is sent to cloud services. For sensitive materials, that is a material concern and Microsoft must clarify data flows.
  • Enterprise manageability gap: Initial Settings controls are promising, but enterprises will require fine‑grained policies, audit logs, and SIEM hooks to trust the functionality at scale.
  • Feature fragmentation: Copilot‑gated document features could split the user experience, creating inconsistent feature availability across consumer and commercial environments.
  • Context menu clutter: Power users prize a lean File Explorer; adding more menu entries risks clutter unless Microsoft provides robust customization options.
  • Stability: Canary experiments can introduce regressions. Organizations and cautious users should avoid enabling Canary builds on production machines.
Where claims are tentative: the exact build number association (Build 27938) and per‑action locality behavior are community‑reported and should be verified against Microsoft’s Flight Hub and official Insider posts before making operational decisions.

Practical recommendations

  • Individuals: try the feature on a test device to see whether the convenience gains match your workflow. For sensitive imagery, prefer local editing or verify the Settings privacy surface after use.
  • Power users: if you dislike an expanded context menu, wait for Microsoft to expose customization options or use documented Registry/ViVeTool approaches only on non‑production systems and with caution.
  • IT admins: monitor Microsoft’s official documentation for Group Policy and MDM controls. Request clarity from Microsoft on audit logging, data retention, and per‑action locality before approving wide deployment.

Conclusion

Microsoft’s File Explorer experiments represent a pragmatic rethinking of where AI lives in daily computing: not as an isolated feature behind an app icon, but as a set of micro‑workflows accessible where users already work. The right‑click AI actions menu, the restored Notification Center seconds clock, and the new generative‑AI visibility surface signal a design philosophy that prizes flow and transparency. Early impressions show clear productivity wins for common image tasks and research workflows, while raising important questions about privacy, locality, enterprise manageability, and fragmentation as document‑level AI actions roll out with licensing gates.
The Canary experiment is a useful preview of how Windows may fold intelligence into the shell: incremental, platform‑centric, and user‑facing. The next steps to watch are explicit locality guarantees, richer enterprise controls, and the graduation of these experiments from Canary to broader Insider channels and finally to general availability. Until Microsoft provides fuller documentation and formal Flight Hub confirmation of build details, testers and administrators should proceed cautiously and treat the current flows as a promising but incomplete preview of Windows’ AI‑first future.

Source: TechWorm, “Microsoft Experiments With AI Tools In Windows 11 File Explorer”
 
