Microsoft’s latest Canary‑flight experiment stitches small, familiar conveniences into a broader push to make generative AI an everyday part of the Windows shell. Right‑click inside File Explorer and you may now see an AI actions submenu that offers visual search and one‑click image edits; Settings gains a Text and image generation privacy surface that shows which apps have recently invoked on‑device generative models; and the Notification Center can again display a larger clock that includes seconds, a tiny change many users have asked for since the first Windows 11 release.

Background​

Microsoft is staging a long, multi‑channel test of Windows 11 features across Canary, Dev, Beta, and Release Preview rings. Canary builds are explicitly experimental and frequently contain server‑side gated features — code may be present while the user‑visible experience remains off until Microsoft widens the rollout. That staging model explains why some Insiders see AI actions and the Notification Center clock while others on the same numeric build do not. Community reporting has associated these experiments with a Canary‑channel flight reported as Build 27938, but Flight Hub and official blog posts can lag behind hands‑on reports, so treat reported build numbers and rollout timings as provisional until Microsoft confirms them.
Microsoft’s approach is clear: instead of confining generative features to single apps, the company is testing ways to surface AI as micro‑workflows where users already work — File Explorer for file‑level edits and visual lookup, Settings for privacy controls, and Notification Center for a familiar clock display. The experiment tests two linked goals: reducing friction for common tasks and demonstrating platform‑level control over on‑device generative capabilities. (theverge.com) (blogs.windows.com)

What’s new in the build: a concise inventory​

  • A new AI actions entry in File Explorer’s right‑click (context) menu for supported image files, exposing visual search and image edit flows (Blur background, Erase objects, Remove background). (theverge.com)
  • A Text and image generation privacy surface in Settings (Privacy & security) that lists apps which recently used Windows‑provided generative AI and offers per‑app toggles. (blogs.windows.com)
  • A reintroduced option to show seconds in the Notification Center clock (Settings > Time & language > Date & time → Show time in the Notification Center). (pureinfotech.com, allthings.how)
  • Numerous stability and reliability fixes typical of Canary flights; as with other Insider releases, many features are server‑gated and may not be visible on every device.
These items are small individually but reveal a strategic pattern: make AI actionable and make AI visible (so users and admins can govern it).

AI actions in File Explorer — what you’ll actually see​

How the UI surfaces AI​

When the feature is enabled for your device, right‑click a supported image in File Explorer and you’ll find an AI actions submenu. The current, reported image actions are:
  • Bing Visual Search — use the image as the query: find similar images, shopping results, landmarks, or extract text. This routes the image to Bing Visual Search and returns results without manual upload. (theverge.com)
  • Blur Background — opens Photos and automatically detects subject and background; you can apply a blur with adjustable intensity or refine using a brush. (theverge.com)
  • Erase Objects — invokes a generative erase workflow in Photos to remove selected distractions from the image. (theverge.com)
  • Remove Background — launches Paint’s automatic background removal to produce a one‑click cutout of the subject. (theverge.com)
These actions either open the host app (Photos or Paint) with the edit staged, or run a quick model‑driven operation and present a preview. Microsoft positions these as shortcuts to existing capabilities rather than embedding a full editor inside Explorer.

Supported file formats and early limitations​

At launch in the reported Canary experiments, the quick actions are limited to common raster formats: .jpg, .jpeg, and .png. RAW, PSD, TIFF and other professional formats are not reliably supported in these quick flows yet. That restriction matters: many professional photographers and designers rely on non‑destructive formats and deeper metadata workflows that Explorer’s one‑click actions do not (yet) address.

Where the models run (local vs cloud) — the ambiguous part​

Microsoft documents and community reporting indicate some generative actions can be executed on‑device (especially on Copilot+ hardware with an NPU), while others may fall back to cloud processing or require Copilot / Microsoft 365 entitlements for richer capabilities. The current Canary rollout mixes local and cloud approaches depending on hardware, licensing, and the specific action. Because Microsoft gates parts of the experience server‑side, the exact execution path for a given action on your device is not always obvious without hands‑on telemetry. Treat this as a key follow‑up question: where did my image go, and was inference local or cloud? (theverge.com)

Privacy and control: Text and image generation settings​

Microsoft added a new Settings page — Settings > Privacy & security > Text and image generation — that surfaces recent app activity for the OS‑provided generative AI features. The page lists apps that requested access to Windows‑provided models in recent days and provides controls to allow or block those apps from using local generative capabilities. Microsoft frames this as a visibility and control step: users can see which third‑party apps invoked generative models and decide whether to permit them. (blogs.windows.com, elevenforum.com)
Key characteristics of the Settings surface:
  • Recent activity list: shows which apps requested generative AI access over a recent window (reported as the past seven days). (elevenforum.com)
  • Per‑app toggles: allow users to block or allow specific apps from using the OS generative functionality. (elevenforum.com)
  • MDM / Group Policy: enterprise controls are present or planned so admins can govern this capability centrally. (elevenforum.com)
This is a meaningful step: one of the major concerns as generative AI becomes platform‑level is the emergence of opaque, user‑hosted inference flows that are hard to audit. The Settings page gives an entry point for accountability, but it is visibility, not a full governance stack yet. Administrators should expect more enterprise features (auditing, telemetry exports, explicit retention policies) to follow. (blogs.windows.com)

Caveats and what to watch for​

  • The Settings surface can show presence (an app asked for generative capabilities) but does not always indicate what was sent or retained, nor the exact inference location unless Microsoft exposes those details. That limit matters for regulated data.
  • Turning off the Text and image generation toggle restricts apps from using Windows‑provided local models but does not block apps that implement their own cloud services; administrators should treat this as one control among many. (elevenforum.com)
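For hands‑on inspection, the per‑app toggles on this page are likely backed by Windows’ per‑capability consent store in the registry, the same mechanism used for camera and microphone access. The capability name (generativeAI) and the paths below are assumptions inferred from community reports and the existing ConsentStore layout, not official documentation — verify on your own test machine before relying on them:

```reg
Windows Registry Editor Version 5.00

; Hedged sketch: per-app generative AI consent, modeled on the existing
; CapabilityAccessManager ConsentStore used for camera/microphone access.
; The "generativeAI" capability name is an assumption from community reports.

; Master toggle for the current user: "Allow" or "Deny"
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\generativeAI]
"Value"="Allow"

; Hypothetical per-app override: block one specific packaged app
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\generativeAI\Microsoft.Paint_8wekyb3d8bbwe]
"Value"="Deny"
```

If the subkey exists on your build, it gives auditors a scriptable view of the same state the Settings page displays.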

The Notification Center clock with seconds — small change, real utility​

After requests from long‑time users, Windows 11’s Notification Center can again display a larger clock that includes seconds. The option appears at Settings > Time & language > Date & time as Show time in the Notification Center. When enabled, the Notification Center flyout shows HH:MM:SS above the date and calendar, restoring a Windows 10 habit many users missed. Practical uses include scripting and troubleshooting where second‑level precision matters. (pureinfotech.com, allthings.how)
Microsoft is rolling this out gradually, and it remains an opt‑in preference for those who want extra motion in the UI. If you don’t see the toggle on your Insider build, community guides show how advanced users have used ViveTool to flip the underlying feature flags, a third‑party approach that is not officially supported and carries risk. (clickthis.blog)
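For reference, the ViveTool approach follows the tool’s standard enable/disable syntax. The feature ID varies per experiment and is shown as a placeholder here — the actual numeric ID comes from community guides, and flipping unreleased flags can destabilize a build:

```shell
:: Unofficial and unsupported: toggles a server-gated feature flag with ViveTool.
:: <feature-id> is a placeholder; consult a current community guide for the real ID.
vivetool /enable /id:<feature-id>

:: Revert if the build misbehaves after the change
vivetool /disable /id:<feature-id>
```

A sign-out or explorer.exe restart is typically needed before the change takes effect.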

Strengths: why this direction matters​

  • Reduced friction for micro‑tasks: One‑click edits and visual lookups inside File Explorer cut context switches. For quick image cleanup or a fast visual search, the new flows are faster than opening an editor, importing, editing, and saving.
  • Platform‑level AI governance: Exposing a Text and image generation settings page is an important early step toward responsible AI — visibility into which apps invoked local generative models helps users and admins audit usage. (blogs.windows.com)
  • Usability responsiveness: Restoring the seconds display addresses a long list of accessibility and power‑user requests; small, visible wins like this build goodwill while Microsoft tests larger AI integrations.

Risks and limitations — what to be cautious about​

  • Ambiguous data paths: In many cases the community cannot yet confirm whether inference happened locally or in the cloud — a crucial distinction for privacy and compliance. Microsoft’s documentation indicates a mix of on‑device and cloud processing depending on hardware and licensing, but the exact behavior can vary by device and action. Until Microsoft publishes clearer execution guarantees, cautious users should avoid processing sensitive material. (theverge.com)
  • Feature gating and inconsistent experiences: Canary builds are experimental. The same build number can present different features across devices because of server‑side toggles. That makes it hard for IT teams to reproduce or test uniformly.
  • Limited file type support: The initial File Explorer actions support only common raster formats (JPG/JPEG/PNG). Professionals using RAW or layered formats won’t get the same benefit.
  • Licensing and hardware fragmentation: Some document summarization and advanced Copilot integrations are gated by Microsoft 365/Copilot licenses or Copilot+ hardware with NPUs. That creates a fragmented user experience and could complicate rollout planning. (theverge.com)

Enterprise perspective: advice for IT and security teams​

  • Plan pilots, not broad deployments. Canary experiments can be volatile. Use isolated pilot machines for evaluation.
  • Confirm execution paths. For any workflow that touches regulated data, require documentation that the inference happens locally and that data is not retained in cloud logs before approving widespread use. If Microsoft does not provide clear guarantees, block the capability via policy.
  • Use the new Settings surface to monitor app activity. The Text and image generation page gives immediate visibility — validate the Recent activity logs against your own app inventory. (elevenforum.com)
  • Leverage Group Policy / MDM controls. Microsoft has exposed policies to allow or deny apps access to system AI models; integrate these into configuration baselines and deployment testing. (elevenforum.com)
  • Educate users. If users can right‑click and trigger AI edits, they need clear guidance about what data is safe to process and which apps are authorized. Training reduces accidental data exposure.
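As a concrete starting point for the policy work above, the control is expected to land in the same AppPrivacy policy family as the existing LetAppsAccess* settings. The value name below (LetAppsAccessGenerativeAI) follows Microsoft’s Privacy CSP naming convention but should be verified against current documentation before it goes into a baseline:

```reg
Windows Registry Editor Version 5.00

; Hedged sketch: force-deny app access to Windows-provided generative AI models
; via the AppPrivacy policy key, mirroring LetAppsAccessCamera and friends.
; Conventional value semantics in this family: 0 = user in control,
; 1 = force allow, 2 = force deny. The exact value name is an assumption.

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\AppPrivacy]
"LetAppsAccessGenerativeAI"=dword:00000002
```

Deploy via Group Policy Preferences or an MDM custom OMA-URI once the equivalent Privacy CSP node is confirmed in Microsoft’s documentation.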

Practical tips for Insiders and enthusiasts​

  • To check whether your device shows the Notification Center seconds toggle: open Settings → Time & language → Date & time → Show time in the Notification Center. If it’s missing, the feature may be server‑gated for your device. (pureinfotech.com)
  • If you want to try AI actions in File Explorer, look for an AI actions submenu when right‑clicking a .jpg/.jpeg or .png file. Expect the action to either open Photos/Paint or launch a small preview before saving. (theverge.com)
  • Avoid processing regulated or sensitive images until you confirm where inference occurs and what retention policies apply. Use test images for evaluation.
  • Administrators: audit the Settings > Privacy & security > Text and image generation → Recent activity page to spot unauthorized app use of generative capabilities. (elevenforum.com)

The bigger picture — where Microsoft is heading​

These changes are tactical rather than transformational. On the surface they’re small: context‑menu shortcuts and a seconds display. Strategically, though, they matter: Microsoft is moving toward OS‑level AI affordances, letting apps and the shell call into platform models while the platform adds controls for visibility and governance.
Expect the following trajectory over the coming quarters:
  • Expanded File Explorer actions beyond basic image edits: document summarization, list generation and Copilot interactions tied to OneDrive/SharePoint files (likely gated to Microsoft 365/Copilot commercial customers initially). (theverge.com)
  • Greater hardware optimization for on‑device inference (Copilot+ PC features), with clear fallbacks to cloud when NPU resources aren’t available. (techcommunity.microsoft.com)
  • A steady addition of administrative and auditing tools in Settings and MDM to bring enterprise governance in line with platform AI capabilities. (elevenforum.com)
If Microsoft pairs convenience with transparent documentation — explicitly stating when models run locally, what data is transmitted, what is logged, and how long artifacts are retained — the platform‑level approach can be a major productivity win. If it releases features before documentation and enterprise controls catch up, it will create friction and risk for business customers.

Final verdict​

The Canary‑channel experiment that surfaces AI actions in File Explorer, restores a seconds clock in Notification Center, and adds a Text and image generation privacy surface is a clear preview of Microsoft’s OS‑level AI strategy: embed useful micro‑workflows where users already operate, and begin layering controls so those workflows can be governed.
These changes are practical and often useful for day‑to‑day tasks, but they arrive amid ambiguity: server‑side gating, hardware and licensing fragmentation, and some outstanding questions about execution locality and retention. Insiders should test these features on isolated hardware and validate execution paths; administrators should treat the Settings privacy surface as a first line of defense while demanding stronger audit and policy capabilities.
The feature set is promising; the platform’s ultimate value will depend on Microsoft’s ability to pair convenience with clear, verifiable guarantees about where inference runs and how data is handled. Until those guarantees are routine, the right stance is cautious curiosity: try the new flows, but do not assume they are safe for sensitive content without confirmation.

Quick checklist: what to try and what to lock down now​

  • Try: Right‑click a .jpg or .png in File Explorer and test Bing Visual Search, Blur background, Erase objects, and Remove background to understand the user flow and app handoff. (theverge.com)
  • Verify: Inspect Settings → Privacy & security → Text and image generation → Recent activity after performing edits to confirm the app shows up in logs. (elevenforum.com)
  • Lock down: For enterprise images, set Group Policy/MDM to deny use of system AI models until you have documentation that inference is local and non‑retained. (elevenforum.com)
  • Opt‑out: If you prefer the minimalist Notification Center, leave Show time in the Notification Center off; enable it only if you need seconds for timed tasks. (pureinfotech.com)
This build is a useful preview of the future Windows desktop where AI assists at the point of work. The features are not yet universal, execution details remain mixed, and the enterprise roadmap needs more polish — but the experiment points to an OS that will increasingly treat generative models as first‑class services.

Source: BetaNews Latest Windows 11 build sees Microsoft adding AI to File Explorer and improving clock options
 
