
On September 8, 2025, Microsoft pushed a fresh Canary-channel flight, reported as Build 27938, that strings together a set of small-but-significant UI and AI experiments. The visible pieces are straightforward: a new "AI actions" entry in File Explorer's right-click menu that surfaces image edits and visual search, a resurrected larger Notification Center clock that can show seconds, and a Settings page that makes on-device and OS-provided generative-AI usage more visible to users and admins.
Taken together, the changes aren't about a single headline feature; they reveal a strategy: make AI a first-class capability inside the shell (File Explorer and Settings), reduce friction for tiny tasks, and begin adding administrative and visibility controls for generative AI. For Insiders and power users this flight is worth testing and critiquing, especially on privacy, enterprise manageability, discoverability, and performance. Below I unpack what's in the build, how the features work today, caveats and known limitations, step-by-step checks and safety advice, and the implications for IT and power users.
What Microsoft shipped in this Canary flight (quick checklist)
- File Explorer: new AI actions surfaced in the right‑click (context) menu for supported image files — launch points for Bing Visual Search and app‑backed edits (Blur background, Erase objects, Remove background).
- Notification Center: an option to return a larger clock with seconds above the calendar (toggle exposed in Settings).
- Settings: a new “Recent activity” view under Privacy & security → Text and image generation that lists apps which have recently used Windows‑provided generative AI APIs.
- Miscellaneous: a collection of File Explorer fixes and thumbnail improvements; continuation of staged rollouts and server‑side feature gating.
This build illustrates two linked trends in Windows’ ongoing evolution:
1) AI-as-actionable-tooling at OS level — instead of putting editing and AI features inside single apps, Microsoft is experimenting with surfacing them at the place where users already interact with files (File Explorer). That reduces context switches for micro‑edits: right‑click → action → back to file operations.
2) Transparency and control for generative AI — the Settings surface signals Microsoft recognizes privacy and governance questions as AI becomes an OS capability. Visibility is the first step; control (policies, toggles, MDM/Group Policy) will be needed next.
Deep dive: AI actions in File Explorer — what you’ll see and how it works
What appears in the UI
- When you right-click a supported image file (today, JPG/JPEG and PNG), you may see an AI actions submenu in the context menu. That entry groups several one-click flows:
  - Visual Search (Bing Visual Search): runs the image as the query input and returns visually similar images, shopping matches, or landmark results.
  - Blur background: a quick Photos-based edit that blurs the detected background around the subject.
  - Erase objects: invokes Photos' generative "erase" flow to remove unwanted elements.
  - Remove background: uses Paint's automatic background removal to produce a subject cutout.
- Some actions open the host app with the edit already staged; others run a quick cloud or local model operation and present a small preview before saving.
- The right-click entry is essentially a shell hook that either:
  - launches an associated app (Photos, Paint) with the edit already scripted, or
  - calls a Windows platform API (on-device or cloud-assisted) to perform the operation and hands the result back to Explorer or the calling app.
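For context, here is a minimal sketch of the classic registry mechanism behind per-file-type context-menu verbs. It is not how Build 27938's packaged actions are implemented (modern entries are IExplorerCommand handlers declared in an app manifest), and the key name, label, and editor path are hypothetical, but it shows the hook pattern Explorer resolves when you right-click a file:

```python
# Illustrative sketch: register a context-menu verb for .png files under HKCU.
# The verb name, label, and editor path are placeholders, not what Windows uses.
import winreg

VERB_KEY = r"Software\Classes\SystemFileAssociations\.png\shell\MyImageAction"

# The default value of the verb key becomes the context-menu label.
with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, VERB_KEY) as key:
    winreg.SetValueEx(key, None, 0, winreg.REG_SZ, "My image action")

# The "command" subkey holds the command line; %1 expands to the clicked file.
with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, VERB_KEY + r"\command") as cmd:
    winreg.SetValueEx(cmd, None, 0, winreg.REG_SZ, r'"C:\Tools\myeditor.exe" "%1"')
```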
Caveats in the current implementation
- Not all edits are guaranteed to run locally. Microsoft uses a hybrid model: some generative workloads run locally on Copilot+ hardware when available and permitted; others may fall back to cloud endpoints. The build's UI does not always indicate which model ran.
- Some actions are thin wrappers around existing app features; they do not add new deep editing engines into Explorer itself.
- Initial availability appears targeted at common raster formats: .jpg/.jpeg and .png. Professional formats (RAW, PSD) are not reliably supported in this flow.
- Complex edits (high‑resolution large files, multi‑layer edits) may still require opening a full editor; the Explorer actions are optimized for quick micro‑tasks.
- The UI and action set are staged; Insiders on the same build may not all see the features due to server‑side gating.
Practical examples
- Quick background removal for a web thumbnail: right-click the PNG → Remove background → export the trimmed PNG (saves time vs. opening a full editor).
- Rapid privacy scrub: right‑click a photo → Erase objects → remove a license plate or face before sharing.
- On‑the‑fly research: right‑click a photo from your screenshots folder → Visual Search to check where the image first appeared online.
Privacy and data handling
- Transparency is improving (a Recent activity UI now lists which apps requested Windows-provided text and image generation in the last seven days), but that is visibility, not a guarantee of data locality or retention limits.
- Questions to ask, and verify, before using AI actions on sensitive files:
- Does the operation run fully locally, or does the image get uploaded to a cloud endpoint for inference?
- If cloud‑based: which service receives the image, how long is it retained, and is it used for model training?
- What metadata is logged in the Recent activity view and in OS telemetry?
- For privacy‑sensitive workloads, test with non‑sensitive files first and inspect the Recent activity pane in Settings to see what is recorded.
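One exploratory way to probe the local-versus-cloud question is to watch for new outbound connections while triggering an action on a throwaway image. A minimal sketch, assuming the third-party psutil package and an elevated prompt; it is suggestive rather than conclusive (existing connections may be reused, and a connection alone proves nothing about retention):

```python
# Snapshot established outbound connections before and after an AI action.
import time
import psutil

def outbound():
    conns = set()
    for c in psutil.net_connections(kind="inet"):
        if c.raddr and c.status == psutil.CONN_ESTABLISHED:
            conns.add((c.pid, c.raddr.ip, c.raddr.port))
    return conns

before = outbound()
input("Trigger the AI action now, then press Enter...")
time.sleep(2)  # give any upload a moment to start
for pid, ip, port in outbound() - before:
    name = psutil.Process(pid).name() if pid else "?"
    print(f"new connection: {name} (pid {pid}) -> {ip}:{port}")
```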
How to check the new features on your machine
1) Check your channel and build
- Only Canary-channel Insiders or equivalent test builds may receive this flight. Confirm you are enrolled in the Canary channel and on a build that includes the changes (a quick verification script follows these steps).
2) Right-click a supported image in File Explorer
- Look for an "AI actions" or similarly labeled entry in the context menu. If it's missing, it may be feature-gated, or your Photos/Paint apps may be out of date.
3) Update Store apps
- Make sure Photos and Paint are updated via the Microsoft Store; many of these integrations rely on app versions that support the actions.
4) Inspect privacy telemetry
- Open Settings → Privacy & security → Text and image generation → Recent activity to see which apps have used Windows-provided generative AI in the last seven days.
5) Toggle the Notification Center seconds clock
- Settings → Time & language → Date & time → look for "Show time in the Notification Center" and toggle it on to get the larger clock with seconds in the Notification Center flyout.
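As promised in step one, a quick sketch that reads the standard version values from the registry to confirm you are on a build that can carry these features:

```python
# Confirm the installed build number (27938 corresponds to this flight).
import winreg

KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
    build, _ = winreg.QueryValueEx(k, "CurrentBuildNumber")  # e.g. "27938"
    ubr, _ = winreg.QueryValueEx(k, "UBR")  # revision, the .NNN suffix

print(f"Windows build: {build}.{ubr}")
if int(build) < 27938:
    print("Not on Build 27938 or later; these features will not appear.")
```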
Rollout caveats and known limitations
- Server-side feature gating: Microsoft frequently ships the code but enables features for subsets of devices via Controlled Feature Rollout. That means even on the same build, two machines can behave very differently.
- Canary behavior: Canary builds are intentionally experimental—expect rough edges, occasional rollbacks, driver incompatibilities, or missing localization. Don’t run Canary on production hardware.
- Some AI actions and summarization features may be tied to licensing or entitlement (for example, Copilot/Microsoft 365 entitlements) or to Copilot+ hardware (on‑device NPU). That creates inconsistent experience across personal and managed machines.
- Accessibility and localization gaps: early flights sometimes omit full screen‑reader support or translated strings; file actions should be tested with assistive tech to confirm discoverability.
- Performance: small edits are lightweight, but batch operations or very high-resolution processing can add CPU/GPU load or disk I/O; on older hardware, expect longer latencies (a simple sampling sketch follows this list).
- Security: adding OS-level model APIs increases surface area. Consider:
- Patch and driver management (AI code may touch media pipelines).
- Monitoring for unexpected data exfiltration in corporate environments.
- Ensuring your endpoint protection stack is compatible with the new build before wide deployment.
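On the performance point above, a crude sampling loop is enough to spot sustained CPU spikes while you exercise an AI action on a test image. This sketch assumes the third-party psutil package; GPU and NPU load are easier to watch in Task Manager's Performance tab:

```python
# Sample system-wide CPU and memory once per second for 30 seconds.
import psutil

print("Run the AI action now; sampling for 30 seconds...")
for _ in range(30):
    cpu = psutil.cpu_percent(interval=1)  # blocks for 1s, returns percent
    mem = psutil.virtual_memory().percent
    print(f"cpu={cpu:5.1f}%  mem={mem:4.1f}%")
```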
What IT admins should do now
- Don't deploy Canary to production. Use a staged pilot on test hardware.
- Audit and policy: Microsoft will likely ship Group Policy / MDM controls to disable or restrict AI actions and the OS-provided generative AI platform. For now, plan to:
- Identify in your compliance policy where OS-level AI calls are permissible.
- Test the Recent activity view to see what is logged and where logs are surfaced.
- Licensing: validate whether enterprise Copilot or Microsoft 365 entitlements are required for certain summarization or cloud‑assisted flows, and plan user communications accordingly.
- App compatibility: test line‑of‑business apps for integration regressions, particularly those that interact with image handling or shell extensions.
- Logging and SIEM: determine whether AI usage events can be surfaced to enterprise logging systems for auditability.
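On the logging and SIEM point: no event channel for these APIs is documented yet, so any probing is exploratory. A sketch that enumerates channels and queries a candidate; the keyword filter is an assumption, not a known channel name:

```python
# Look for event channels that might carry AI-usage events (run elevated).
import subprocess

channels = subprocess.run(
    ["wevtutil", "el"], capture_output=True, text=True, check=True
).stdout.splitlines()
candidates = [c for c in channels if "AI" in c or "Copilot" in c]  # guesswork
print("Candidate channels:", candidates or "none found")

if candidates:
    # Newest 20 events from the first candidate, as plain text.
    out = subprocess.run(
        ["wevtutil", "qe", candidates[0], "/c:20", "/f:text", "/rd:true"],
        capture_output=True, text=True,
    )
    print(out.stdout or out.stderr)
```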
Practical advice for power users
- Backups first: always image or snapshot test machines before upgrading to a new Canary build. Expect to roll back if something breaks.
- VM testing is your friend: run Canary previews in a virtual machine or on a non‑critical device.
- If you need a consistent user experience, don't expect Canary experiments to reflect the stable channel. Flight Hub remains the authoritative place to confirm builds and channels.
- Want to try hidden flags? Be cautious with third-party flag tools (example: ViVeTool). They can enable unfinished features but also leave your system in a nonstandard state. Only use them on disposable test images and be ready to restore; a scripted example follows this list.
- Feedback path: file issues via Feedback Hub and include reproductions, steps, and screenshots—Insider feedback directly influences refinement and rollout decisions.
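If you do experiment with ViVeTool despite the caution above, script the session so every query and change is logged, and only on a snapshotted VM. The install path and feature ID below are placeholders; the IDs behind this flight's experiments are deliberately not filled in here:

```python
# Wrap ViVeTool so the before/after state is captured (disposable VM only).
import subprocess

VIVETOOL = r"C:\Tools\ViVeTool\vivetool.exe"  # assumed install location
FEATURE_ID = "00000000"  # placeholder; substitute a real ID at your own risk

# Record the current state before changing anything.
subprocess.run([VIVETOOL, "/query", f"/id:{FEATURE_ID}"], check=True)

# Enable only after you have a snapshot to roll back to.
subprocess.run([VIVETOOL, "/enable", f"/id:{FEATURE_ID}"], check=True)
```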
Critical analysis and open questions
- Context-menu clutter vs. discoverability: putting AI actions in the right-click menu is convenient, but without good scoping and toggles it risks overloading the menu. Microsoft will need to provide user controls (hide/unhide, prioritize) and heuristics for when actions appear.
- Local vs cloud inference messaging: the UI should clearly indicate when an action uses local models vs cloud endpoints—users (and admins) need that clarity.
- Permission granularities: per‑action permissions are better than a global allow/deny. Ideally users or admins can permit Visual Search but block cloud‑assisted generative erases.
- Accessibility parity: the actions must be keyboard and screen‑reader friendly. That’s non‑negotiable for enterprise deployments and broad adoption.
What to watch next
- Incremental rollouts: expect Microsoft to iterate with more file types, more AI actions (text summarization for documents was signaled in some previews), and expanded management controls.
- App dependence: many of the actions are mediated by Photos and Paint app updates, so keep the Microsoft Store apps current.
- Policy & telemetry: watch for Group Policy / MDM controls and administrative templates that allow organizations to opt in/out.
- Wider availability: features that land in Canary are candidates for Dev/Beta then broad release—if they meet privacy, accessibility and manageability requirements.
Quick checklist before you experiment
- Test device only: use a VM or non-critical hardware.
- Full backup or snapshot: before upgrading, capture a clean image.
- Update Microsoft Store apps: ensure Photos and Paint are at the latest versions (a winget sketch follows this checklist).
- Inspect Settings: look at Privacy & security → Text and image generation → Recent activity to see what is logged.
- Test with non‑sensitive files: run AI actions on throwaway images to observe behavior and latency.
- Monitor telemetry and performance: watch Task Manager and system logs for unexpected CPU/GPU spikes.
- File feedback: use Feedback Hub to report problems and UX suggestions.
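For the Store-app item in the checklist, winget can script the refresh. The display-name queries below are assumptions; confirm the exact names on your machine with winget list first:

```python
# Upgrade Photos and Paint via winget, skipping interactive prompts.
import subprocess

for app in ("Microsoft Photos", "Paint"):
    subprocess.run(
        ["winget", "upgrade", app,
         "--accept-package-agreements", "--accept-source-agreements"],
    )
```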
Build 27938 isn't about a single "big" feature. It's a clear statement about where Microsoft wants Windows to go: AI integrated into the places you already work (Explorer and Settings), with incremental controls and visibility for users and admins. For power users and admins this is an ideal time to stress-test the model: validate performance and privacy behavior, record what the Recent activity UI surfaces, and prepare policy responses for a future where these OS-level AI hooks become common.
If you’re tempted to try it: be deliberate. Canary is for experimentation and feedback, not production. Test on a spare machine or VM, back up first, and use the Recent activity view and Feedback Hub to hold Microsoft accountable for data handling and manageability. If you want, post your hands‑on results here at WindowsForum and we’ll compare notes—speed, accuracy, whether actions run locally or send data to the cloud, and how well the UI signals that difference.
Appendix — quick reference (where to look in Settings)
- Enable Notification Center seconds clock: Settings → Time & language → Date & time → Show time in the Notification Center (toggle).
- Inspect generative AI activity: Settings → Privacy & security → Text and image generation → Recent activity (lists apps that called the platform in the last 7 days).
- To test AI actions: right‑click a .jpg/.jpeg/.png in File Explorer; look for “AI actions” or similar entry in the context menu.
Source: Neowin, "Microsoft tests new Windows 11 File Explorer features, popular clock feature in build 27938"