Microsoft’s latest Insider previews fold AI deeper into the Windows shell: right‑click a picture in File Explorer and you may now see an “AI actions” submenu offering Bing Visual Search, background blur, object erase, and background removal — a small set of micro‑workflows that signals a broader shift toward surface‑level AI in Windows. (blogs.windows.com)

Background

Microsoft has been steadily embedding generative and assistive AI across Windows for more than a year, moving beyond a single Copilot app and instead surfacing intelligence where users already work. File Explorer — the central hub for discovery and file management in Windows — is the logical next place to expose quick AI affordances, because it reduces context switching for routine tasks like quick edits, visual lookup, and content summarization. Early developer and Insider flights have experimented with similar concepts in Dev and Beta channels; the latest reports tie these context‑menu actions to recent Canary‑channel activity. (blogs.windows.com, windowscentral.com)
Microsoft’s official Insider documentation first described the File Explorer AI actions in Dev‑channel notes that outline four image actions (Bing Visual Search, Blur Background, Erase Objects, Remove Background) and a future roadmap to extend actions to Microsoft 365 files (Summarize, Create an FAQ) for Copilot‑licensed tenants. Independent coverage from major outlets confirms the UX and the intent behind the feature: make AI a series of one‑click workflows that keep users “in flow” inside the shell. (blogs.windows.com, theverge.com)

What’s actually new (practical summary)​

  • A new AI actions entry appears in the File Explorer right‑click (context) menu for supported image files. (blogs.windows.com)
  • Initial image actions (at time of reporting):
      • Bing Visual Search — use an image as the query to find similar images, shopping results, landmarks, and extract text.
      • Blur Background — launches Photos to blur the background automatically (with intensity/brush controls).
      • Erase Objects — invokes Photos’ generative erase to remove selected elements.
      • Remove Background — uses Paint’s automatic background removal for a one‑click subject cutout.
    These actions are currently reported to support .jpg, .jpeg, and .png files. (blogs.windows.com, windowscentral.com)
  • Microsoft plans to add document‑level AI actions (Summarize / Create an FAQ) for Microsoft 365 files; initial availability is being gated by Copilot/Microsoft 365 licensing, Entra IDs, and Beta/Microsoft 365 Insider enrollment. Supported document filetypes were listed by Microsoft for those future actions. (blogs.windows.com)
  • A small UX restoration: a toggle to show a bigger clock with seconds in the Notification Center (Settings > Time & language > Date & time > Show time in the Notification Center) is rolling out to Insiders.
Note: community reporting has associated this cluster of features with a Canary build labelled Build 27938, but Canary flight numbers and feature gating can be inconsistent; treat a specific numeric tag as community‑reported until confirmed by Microsoft’s Flight Hub or an Insider Blog post.

How AI actions work in practice​

Surface and flow​

AI actions are a shell hook: a right‑click on a supported image can either:
  • launch a first‑party app (Photos or Paint) with the edit already staged, or
  • call a platform AI API to run a quick transformation or a visual lookup and then present the result or open the relevant app.
Because the actions reuse Photos, Paint, and Bing Visual Search, the exact behavior depends on the app versions installed and whether the OS chooses local or cloud processing for a given operation. Microsoft’s hybrid model means some processing may run locally on Copilot+ hardware while other workloads may fall back to cloud endpoints; Microsoft has not publicly published a comprehensive per‑action locality guarantee for every scenario. This ambiguity is important for privacy and performance expectations. (blogs.windows.com, pureinfotech.com)

Supported file types and early limitations​

  • Image actions at introduction: .jpg, .jpeg, .png. RAW, PSD, and many professional formats are not reliably supported in the quick‑action flows yet; professional workflows still require full editors (a short eligibility check is sketched after this list). (windowscentral.com, pureinfotech.com)
  • Document actions (Summarize/Create an FAQ) will target Microsoft 365 files first and have a broader filetype list (docx, pptx, xlsx, pdf, txt, rtf, aspx/html variants). Availability is being staged and will initially require Copilot subscriptions for some features. (blogs.windows.com)
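For testers triaging a test folder before experimenting, a minimal sketch like the one below (plain Python; the folder path is a placeholder) lists which files fall inside the reported .jpg/.jpeg/.png set and which, such as RAW or PSD, would still need a dedicated editor.

```python
from pathlib import Path

# File types reported to surface the File Explorer "AI actions" submenu in the preview.
AI_ACTION_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def ai_action_eligible(path: Path) -> bool:
    """True if File Explorer's quick AI actions are reported to appear for this file type."""
    return path.suffix.lower() in AI_ACTION_EXTENSIONS

def triage_folder(folder: str) -> None:
    """Print which files would show the quick actions and which still need a full editor."""
    for path in sorted(Path(folder).iterdir()):
        if path.is_file():
            verdict = "AI actions expected" if ai_action_eligible(path) else "needs a full editor"
            print(f"{path.name}: {verdict}")

if __name__ == "__main__":
    triage_folder(r"C:\Users\Public\Pictures")  # placeholder path; point at your own test folder
```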

Enabling the feature (Insider testing)​

Microsoft is rolling AI actions out via the Insider channels and server‑side feature gating, so not every Insider will see the menu immediately. Community researchers have published ViVeTool IDs that can toggle the feature, but using them is unofficial and can be risky on production machines. If you choose to experiment, do so on test hardware or a VM, follow documented ViVeTool guides closely, and see the minimal sketch below for the general shape of the workflow. (windowscentral.com, neowin.net)
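For reference, that workflow is sketched below as a small Python wrapper around ViVeTool. Both the executable path and the feature ID are placeholders, not real values; the actual IDs circulate in community posts and can change between flights. Run it only from an elevated prompt on a disposable test machine or VM, and reboot afterwards.

```python
import subprocess
from pathlib import Path

# Placeholders: adjust the path to wherever ViVeTool is unpacked and substitute the
# community-reported feature ID (deliberately not reproduced here).
VIVETOOL = Path(r"C:\Tools\ViVeTool\vivetool.exe")
FEATURE_ID = "00000000"  # placeholder, not a real feature ID

def toggle_feature(enable: bool = True) -> None:
    """Enable or disable a server-gated feature via ViVeTool (unofficial; test machines only)."""
    verb = "/enable" if enable else "/disable"
    subprocess.run([str(VIVETOOL), verb, f"/id:{FEATURE_ID}"], check=True)
    print("ViVeTool call completed; reboot the test machine for the change to take effect.")

if __name__ == "__main__":
    toggle_feature(True)
```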

Cross‑checks and verification (what’s confirmed vs. what’s tentative)​

Confirmed by Microsoft and independent outlets:
  • The concept and implementation of the AI actions entry in File Explorer and the four initial image edits are documented in Microsoft Insider notes and confirmed by major tech outlets. (blogs.windows.com, theverge.com)
  • Supported image file types at launch are JPEG/JPG and PNG. (pureinfotech.com, windowscentral.com)
  • Document summarization and other Office‑centric actions are planned with licensing restrictions (Copilot/Microsoft 365) and a broader supported filetype set for those features. (blogs.windows.com)
Tentative or community‑reported items that need caution:
  • The specific association of these menu changes with Build 27938 (Canary) comes from community reporting and forum/press aggregation. It is reasonable to say the behavior has appeared in Canary‑adjacent flights, but Flight Hub and Insider Blog entries for a given Canary label can lag, and Canary flights are heavily server‑gated. Treat build‑number claims as community‑reported and verify against Microsoft’s official Flight Hub if you require absolute certainty.
  • Which AI actions run locally versus in the cloud depends on hardware, OS configuration, and Microsoft’s runtime decisions; Microsoft hasn’t published a full per‑action locality decision tree for the preview, so assume a hybrid model until Microsoft provides explicit guarantees. (blogs.windows.com)

Why this matters — strategic and practical implications​

For everyday users​

  • Fewer context switches: One‑click edits and visual lookups reduce friction for quick tasks like stripping backgrounds for thumbnails, removing distractions, or doing research from a screenshot.
  • Lowered skill barrier: Casual creators and non‑designers get access to features that previously required an editor like Photoshop, directly from the file system.

For power users and creatives​

  • Speed for micro‑tasks: The flows are optimized for rapid iteration, not heavy retouching or multi‑layer edits. Professionals working with RAW/PSD or high‑res imagery will continue to rely on dedicated tools.
  • Discoverability vs clutter: Adding AI items to the right‑click menu improves discoverability for less technical users but risks menu clutter for power users who prefer a lean context menu. Microsoft will need to provide customization or a hide option to avoid UX friction. (windowscentral.com)

For IT and enterprise​

  • Governance and visibility: Microsoft is adding a Settings surface (Privacy & security > Text and image generation) to show recent app activity using Windows‑provided generative models — a first step for visibility and policy management. Administrators will want more granular controls (per‑action, per‑app, audit logs, network egress policies) to manage data flow and minimize exfiltration risk.
  • Licensing fragmentation: Document‑level AI actions are being gated by Microsoft 365/Copilot licensing in initial rollouts; organizations must plan entitlements and user expectations accordingly. (blogs.windows.com)

Privacy, security, and governance: a careful look​

  • Data locality ambiguity: Microsoft’s hybrid execution model means some operations could be handled locally on Copilot+ hardware while others use cloud services. Until Microsoft publishes explicit, per‑action locality guarantees, assume that visual search and some Copilot document analyses may upload content to remote services. Treat sensitive files as presumptively not private unless you confirm local‑only processing. (blogs.windows.com)
  • Audit and egress risk: Any OS surface that simplifies uploading file content increases the potential attack surface for data exfiltration if a machine is compromised. Enterprises should:
      • Inventory devices with AI features enabled.
      • Enforce strict network egress rules and DLP where appropriate.
      • Monitor telemetry for anomalous bulk uploads or unusual Copilot API usage.
  • Least privilege and MDM: The new Settings page gives visibility, but IT needs Group Policy and MDM controls that can block or throttle per‑action behavior. Expect Microsoft to evolve GPO/MDM hooks as the features mature — but do not rely on them for immediate enterprise enforcement without testing.
  • User education: Because the right‑click menu is so discoverable, users may inadvertently upload sensitive screenshots or documents. Clear user guidance and internal policy are essential during pilot phases.

Known issues and stability cautions​

Canary and Dev builds are experimental by design. Recent Canary flights have had issues causing install rollbacks and other regressions; Microsoft has acknowledged rollback scenarios tied to 0xC1900101‑style errors in prior Canary builds and continues to investigate. Anyone installing Canary builds should expect instability and plan to use test devices rather than production deployments. (neowin.net, windowsforum.com)
Reporters and Insiders have also documented device‑specific problems (taskbar/graphics glitches, explorer freezes, driver interactions with update paths). Recommended safety precautions:
  • Test on non‑mission‑critical hardware or virtual machines.
  • Create full backups or system images before trying Canary builds.
  • If you require stable operation, wait for features to arrive in Beta/Release Preview or the general channel. (blogs.windows.com)

How to try AI actions (step‑by‑step for Insiders)​

  1. Join the Windows Insider Program and enroll the test device in the Dev or Canary channel, depending on the build you want to try. (blogs.windows.com)
  2. Update Windows via Windows Update until you reach the latest preview build available to your ring (the snippet after the tips below shows one way to confirm the exact build).
  3. If the AI actions entry does not appear, expect server‑side gating. Advanced testers may use ViVeTool IDs circulated by the community to toggle features, but this is unofficial and risky on production machines; use those IDs only on disposable test devices and follow ViVeTool usage guides. (windowscentral.com, neowin.net)
  4. Right‑click a supported image (.jpg/.jpeg/.png) in File Explorer and inspect the AI actions submenu. Choose a quick action (Visual Search, Blur Background, Erase Objects, Remove Background) and observe whether the edit is staged in Photos/Paint or returned inline. (blogs.windows.com)
Numbered tips for safe experimentation:
  1. Use a VM or spare laptop so you can roll back without impacting daily work.
  2. Try actions only on non‑sensitive images until you understand where processing occurs.
  3. Keep Photos and Paint app updates current (Store updates can change behavior).
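Because the AI actions entry is server‑gated even on matching builds, it helps to confirm exactly which preview build a test machine is on before troubleshooting further. The minimal sketch below reads well‑known registry values with standard‑library Python; running winver gives the same information interactively.

```python
import winreg

def current_windows_build() -> str:
    """Return the installed Windows version and build, read from the registry."""
    key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        display, _ = winreg.QueryValueEx(key, "DisplayVersion")
        build, _ = winreg.QueryValueEx(key, "CurrentBuildNumber")
        revision, _ = winreg.QueryValueEx(key, "UBR")
    return f"{display} build {build}.{revision}"

if __name__ == "__main__":
    # Compare this against the build your Insider channel is currently flighting.
    print(current_windows_build())
```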

Strengths and weaknesses — a balanced assessment​

Strengths​

  • High productivity upside: Small edits and lookups are frequent tasks; shaving seconds off each action compounds to measurable time savings across a day. (pureinfotech.com)
  • Tactical integration: By routing actions through existing first‑party apps (Photos, Paint) Microsoft reduces engineering duplication and leverages capabilities already in the ecosystem. (blogs.windows.com)
  • Governance visibility: Adding a Settings surface to show app usage of OS generative models is the right first move toward enterprise manageability.

Weaknesses and risks​

  • Privacy opacity: It’s not always explicit which actions run locally vs. in the cloud; that gap undermines trust for sensitive workflows. (blogs.windows.com)
  • Menu clutter: The right‑click menu is prime UX real estate; without customization, AI entries could annoy power users. (windowscentral.com)
  • Licensing complexity: Splitting Office‑level actions by Copilot/Microsoft 365 entitlements creates mixed experiences across personal and enterprise devices. (blogs.windows.com)
  • Canary instability: Experimental channel rollouts can cause rollbacks and driver regressions; Insiders must tread carefully. (neowin.net)

Recommendations for different audiences​

  • For casual Insiders: Try AI actions on a secondary device and test image edits to understand performance and quality. Report usability and privacy concerns through Feedback Hub.
  • For Windows power users: Watch for customization options or registry keys to hide context‑menu items; weigh convenience vs. context‑menu bloat and request controls via Feedback Hub.
  • For IT admins: Pilot with a small user subset, inventory which endpoints show the feature, and coordinate with security teams on DLP/egress rules. Do not enable Canary builds on production devices.
  • For developers: If your app handles images or documents, validate how Explorer’s AI actions interact with your file formats and whether metadata (EXIF, DRM) is preserved or stripped during quick edits.
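On that last point, a quick way to spot‑check metadata handling is to diff EXIF tags between an original image and the file a quick action produces. The sketch below is a minimal example using Pillow with hypothetical file paths; it covers EXIF only, so XMP or DRM checks would need additional tooling.

```python
from PIL import Image          # pip install Pillow
from PIL.ExifTags import TAGS

def exif_tags(path: str) -> dict:
    """Return a {tag_name: value} mapping of EXIF data, or an empty dict if none is present."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {str(TAGS.get(tag_id, tag_id)): value for tag_id, value in exif.items()}

def report_metadata_diff(original: str, edited: str) -> None:
    """Print EXIF tags that were dropped or altered by an edit."""
    before, after = exif_tags(original), exif_tags(edited)
    lost = sorted(set(before) - set(after))
    changed = sorted(t for t in set(before) & set(after) if before[t] != after[t])
    print(f"Tags lost after edit: {lost or 'none'}")
    print(f"Tags changed after edit: {changed or 'none'}")

if __name__ == "__main__":
    # Hypothetical paths: an original photo and the output of a quick "Remove Background" edit.
    report_metadata_diff(r"C:\Temp\original.jpg", r"C:\Temp\original_cutout.png")
```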

The long view: what this reveals about Windows’ AI roadmap​

Surface‑level AI actions in File Explorer are part of a larger product thesis: make AI invisible-in-plain-sight by embedding it where users work, rather than confining it to a single app. That strategy reduces friction and encourages mass adoption, but it requires careful work on privacy, licensing, enterprise controls, and discoverability to avoid backlash. Microsoft’s incremental approach — ship plumbing in a Canary build, gate features server‑side, expand into Dev/Beta, and then to general users — allows rapid iteration but forces early testers to accept instability and incomplete documentation. The real inflection point will be when Microsoft clarifies local vs cloud execution, publishes enterprise management hooks, and rationalizes licensing so consumers and businesses can predict cost and behavior reliably. (blogs.windows.com, theverge.com)

Final take​

AI actions in File Explorer are a pragmatic, high‑leverage experiment: small, discoverable workflows that speed up everyday image tasks and position Windows as an OS that weaves AI into the fabric of daily work. The immediate benefits are clear for routine edits and visual lookups; the unresolved issues are also clear — data locality, governance, licensing, and the instability inherent in Canary flights. Insiders and IT teams should treat this as a preview of a direction, not a production feature yet: test cautiously, insist on stronger transparency for data flows, and push for admin controls before broad deployment.
If your priority is stability and predictable governance, wait until these actions reach Beta or Release Preview with explicit documentation. If you want to play with the future of file‑level AI now, use a test device, keep backups, and file feedback — Microsoft is actively listening and iterating during these early flights. (windowscentral.com)

Source: thewincentral.com Windows 11 Insider Preview Build 27938 (Canary Channel)
 

Microsoft is quietly folding AI-powered image editing and Bing visual search directly into the File Explorer right‑click menu as part of Insider testing, a move that turns the once‑simple file manager into a micro‑workflow gateway for quick edits and visual lookups. (blogs.windows.com)

Background

Microsoft’s long-running effort to surface AI where users already work has accelerated across Windows in 2024–2025, moving beyond a single Copilot app to a set of contextual tools embedded throughout the shell. The company has been trialing “AI actions” — right‑click shortcuts that launch small, focused tasks such as visual search or single‑click photo edits — in Windows Insider builds and rolling related Photos and Paint enhancements through the Microsoft Store. Official Insider blog posts describe the concept and the initial image actions, while hands‑on coverage from major tech outlets confirms the basic UX and goals. (blogs.windows.com, theverge.com)
Microsoft presents these AI actions as quick micro‑workflows rather than full photo editors: File Explorer becomes the discovery surface and launcher, while Photos, Paint, or Bing handle the actual processing. That design lets Microsoft add capability without massively increasing Explorer’s code surface, but it also means the experience depends on the versions and permissions of the backing apps.

What Microsoft is testing in File Explorer right now​

The new right‑click “AI actions” entry​

  • A new AI actions submenu appears in the File Explorer context menu when you right‑click supported image files. (blogs.windows.com)
  • Initial image actions being tested include:
      • Bing Visual Search — use the selected image as the search query to find visually similar images, related pages, shopping results, landmarks, and extractable text. (blogs.windows.com, theverge.com)
      • Blur Background — launches Photos and applies a background blur around a detected subject; controls include blur intensity and a brush to refine the mask. (blogs.windows.com)
      • Erase Objects — invokes Photos’ generative erase flow to remove distracting elements. (blogs.windows.com)
      • Remove Background — routes the image to Paint’s automatic background removal to produce a quick subject cutout. (blogs.windows.com)
These actions currently show up for common raster formats only — .jpg, .jpeg, and .png — and Microsoft notes that document‑level actions (Summarize, Create an FAQ) are planned next for Microsoft 365 files but will be gated by Copilot/Microsoft 365 entitlements initially. (blogs.windows.com, pureinfotech.com)

How the plumbing works (what actually runs where)​

  • The right‑click entry is a shell hook: Explorer either launches the associated app (Photos or Paint) with the edit staged, or calls a platform API that performs a quick model‑driven operation and returns the result.
  • Execution can be local or cloud depending on hardware, licensing, and the specific operation. On Copilot+ certified PCs (with NPUs), on‑device inference is possible; other devices may fall back to cloud processing. Microsoft’s messaging and community tests indicate this hybrid approach, although Microsoft has not published an exhaustive per‑action locality guarantee. (blogs.windows.com)
  • Because the actions reuse Photos, Paint, and Bing, the exact behavior you see depends on the installed app versions and the OS decision to run locally or in the cloud. That means results and privacy implications can vary across devices.
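Microsoft has not published the routing logic, so the sketch below is purely illustrative pseudologic for the hybrid decision described above; the function, conditions, and result strings are assumptions, not Microsoft's implementation. Only the action‑to‑app mapping is taken from the Insider notes.

```python
from dataclasses import dataclass

@dataclass
class Device:
    has_npu: bool  # Copilot+ class hardware capable of on-device inference

# Mapping of quick actions to the first-party surface that handles them (from Insider notes).
ACTION_TARGETS = {
    "Bing Visual Search": "Bing",
    "Blur Background": "Photos",
    "Erase Objects": "Photos",
    "Remove Background": "Paint",
}

def dispatch(action: str, device: Device) -> str:
    """Illustrative only: where a quick action might run, not Microsoft's actual decision tree."""
    target = ACTION_TARGETS[action]
    if action == "Bing Visual Search":
        return f"send the image to {target} as a cloud query"
    if device.has_npu:
        return f"stage the edit in {target}, preferring on-device inference"
    return f"stage the edit in {target}, possibly falling back to cloud processing"

if __name__ == "__main__":
    print(dispatch("Remove Background", Device(has_npu=False)))
```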

What’s new in Settings: visibility and control for on‑device generative AI​

Microsoft has introduced a new Settings surface under Privacy & security > Text and image generation that lists apps which recently used Windows‑provided generative models and provides per‑app toggles to allow or block access to those on‑device models. This is a first step toward transparency and control for platform‑level generative APIs. (blogs.windows.com)
Key characteristics of this settings surface reported in Insider previews include:
  • A Recent activity view that shows which apps invoked Windows’ generative AI in the last several days.
  • Per‑app toggles to block/allow access to the OS generative surface.
  • Indications that enterprise controls (MDM / Group Policy) are present or planned to let administrators govern access at scale.
This control layer is important because raising AI to the platform level changes who can call a model and under what constraints: app developers no longer need to bundle or call their own model hosts when they can invoke Windows’ APIs — and that raises operational, privacy, and audit questions that Microsoft is beginning to address.

Why this matters: productivity wins and UX rationale​

Microsoft’s strategy is to reduce context switching and make common micro‑tasks faster by surfacing actions in Explorer — the place most Windows users start when dealing with files. The expected benefits include:
  • Faster one‑click edits for social thumbnails, screenshots, or quick marketing assets.
  • Rapid privacy scrubbing (erase license plates or faces) before sharing images.
  • Immediate visual research from screenshots via Bing Visual Search without manually uploading files to a browser.
  • Unified access to AI features across the OS — Explorer, Photos, Paint, and Copilot become consistent entry points for small tasks. (blogs.windows.com, theverge.com)
Designers and photographers will still rely on full editors for complex, non‑destructive workflows, but for the majority of everyday users, these micro‑workflows promise genuine time savings.

Cross‑checking the claims: what’s confirmed and what’s community‑reported​

Multiple independent sources corroborate the existence and behavior of the new AI actions. Microsoft’s own Windows Insider posts describe the File Explorer AI actions and the initial image set, while reputable outlets (The Verge, LaptopMag, PureInfotech) and multiple community hands‑on reports relay the same set of features and limitations. (blogs.windows.com, theverge.com, pureinfotech.com)
At the same time, some details circulating in community threads — notably the exact build number associated with the earliest sightings (reported as Build 27938) — may not appear immediately in Microsoft’s Flight Hub or official blog updates. Flight Hub and official Insider blog posts remain the authoritative sources for build lists; community screenshots and walkthroughs can be accurate, but server‑side gating and staged rollouts create inconsistencies across devices. Treat specific build numbers reported in the wild as provisional until confirmed by Flight Hub or an official blog post. (learn.microsoft.com)

Privacy and security implications — what to watch​

Embedding visual search and generative edits at the OS level brings real convenience — and real questions. Key risks:
  • Unclear locality and telemetry: Some actions may be processed in the cloud. If a photo contains sensitive content, an invisible upload could expose it. Microsoft’s hybrid model and server‑side gating mean you may not always know whether inference was local or sent to Microsoft servers.
  • Ease of accidental sharing: Right‑click workflows simplify tasks — but they also reduce friction for uploads. Users might perform a visual search or a Copilot query without fully realizing that the file left their device.
  • App attribution and auditability: The new Settings surface lists app activity, but visibility is not a full audit trail. For enterprise compliance you’ll want server logs, retention policies, and group policy controls to be clear and enforced.
  • Licensing and access gating: Document‑level AI actions (Summarize, Create an FAQ) are being limited initially to Microsoft 365 commercial customers with Copilot entitlements; misalignment across tenants could create unexpected behavior in mixed environments. (blogs.windows.com)
  • UI bloat and discoverability: Adding AI actions to the right‑click menu increases surface area; users and admins may want an easy way to hide or control the menu to prevent accidental use or simply to reduce clutter. Community guides already discuss ways to remove entries via registry edits or feature flags.
Because these features are being trialed in Insider rings, they provide a critical window for security and privacy teams to assess operational risk before general availability.

Enterprise considerations​

  • Governance: Admins should evaluate how the Text and image generation setting behaves with MDM and Group Policy, and whether organization‑wide deny/allow rules are enforced. Initial documentation indicates such controls are planned.
  • Data residency and compliance: Organizations in regulated industries should ask whether Explorer‑triggered visual searches or Copilot queries are routed through Microsoft cloud services and whether that data is retained or processed in a compliant region. Microsoft’s hybrid processing model complicates these assessments.
  • Licensing: The Summarize/Create an FAQ actions for document files are being gated by Microsoft 365/Copilot licensing for commercial tenants. IT should confirm entitlements and plan user expectations accordingly. (blogs.windows.com)
  • Change management: Because features can be server‑gated, staged, or toggled, enterprise pilots should run controlled tests on a defined set of machines to observe behavior before a broad rollout.

Practical guidance for Insiders and power users​

Below are practical steps and tips for those testing the feature in Insider builds:
  • To try AI actions: join the Windows Insider Program (Dev or Canary channel where the experiments are surfacing), update to the latest preview build, and right‑click a JPG/PNG in File Explorer to see the AI actions entry. (blogs.windows.com)
  • If AI actions do not appear: features are server‑side gated and may not be enabled on every device even on the same build. Wait for Microsoft’s staged rollout, or (advanced users only) try the community ViVeTool tweaks; community posts have shared IDs that unlocked similar context‑menu experiments in other flights. Exercise caution: ViVeTool and registry edits can destabilize your system. (pureinfotech.com)
  • To minimize data exposure:
      • Disable Visual Search if you are concerned about cloud uploads.
      • Use the new Settings > Privacy & security > Text and image generation surface to view recent app activity and block apps from using the OS generative surface.
      • For sensitive content, prefer offline, local‑only image editors or disable on‑device generative features through policy where available.
  • To hide the AI actions menu: power users have shared registry and Group Policy approaches in community forums; consult your enterprise security policy and test in a controlled environment before applying at scale.

Implementation details and current limits​

  • Supported image formats for Explorer AI actions at initial rollout: .jpg, .jpeg, .png. Professional RAW and PSD files are not reliably supported by the quick actions. (blogs.windows.com)
  • Some document actions (Summarize, Create an FAQ) will initially be available only to Microsoft 365 commercial subscribers with Copilot licensing and may require Entra ID sign‑ins. Consumer support is planned later. (blogs.windows.com)
  • Microsoft’s Photos app updates (which supply Blur and Erase flows) and Paint updates (for Remove Background) are part of the feature stack; keeping Store app updates current ensures the Explorer shortcuts behave as intended. (blogs.windows.com)

Strengths — what Microsoft got right​

  • Low friction: Placing micro‑edits and visual lookup in Explorer removes a lot of repetitive clicks for everyday tasks.
  • Leverages existing apps: Rather than re‑implementing features in Explorer, Microsoft reuses Photos and Paint, reducing duplication and making incremental updates easier.
  • Controls and visibility: Adding a Settings surface to show recent generative activity is a sensible first step for transparency and governance.
  • Gradual gating: Using Insider channels and server‑side rollout helps Microsoft iterate and monitor usage patterns before broad deployment.

Risks and potential downsides​

  • Ambiguous data flows: Without a clear per‑action locality guarantee, users and admins cannot always know if an image was processed locally or uploaded — a material privacy risk.
  • Menu clutter and accidental use: Right‑click actions increase the chance of accidental uploads or edits by users who don’t fully read prompts.
  • Fragmented experience: Differences in Photos/Paint versions and server gating lead to inconsistent experiences across users on the same build.
  • Enterprise complexity: Licensing gates for document actions introduce complexity for mixed consumer/enterprise environments and may confuse users when features appear and disappear. (blogs.windows.com)

What to watch next​

  • Official Flight Hub and Windows Insider Blog updates for confirmed build numbers and rollout timelines; community reports have tied these experiments to Canary‑channel flights (Build 27938 is the number mentioned), but those build numbers should be treated as provisional until listed on Flight Hub.
  • Detailed Microsoft documentation that clarifies:
      • per‑action locality (which edits run locally on Copilot+ hardware vs. which use cloud services),
      • data retention and telemetry policies for Visual Search and Copilot invocations,
      • MDM/Group Policy controls for enterprise governance.
  • The consumer timeline for document AI actions and any broad rollouts to the Beta and Release Preview channels. (blogs.windows.com)

Conclusion​

Microsoft’s decision to bring AI image edits and Bing Visual Search into File Explorer signals a deliberate shift: AI is moving from siloed apps into the OS surfaces people use every day. The change offers real productivity benefits for quick edits and visual research, but it also raises important privacy, governance, and user‑experience questions that must be answered before these capabilities reach mainstream Windows users. Insider testing — with staged rollouts, app updates, and a new Privacy & security surface — is the right environment to refine those answers, and organizations should use this preview period to define policies, test behavior, and harden controls against accidental or unwanted data flows. (blogs.windows.com, theverge.com)

Source: Crypto Briefing Microsoft tests AI image editing and Bing search tools in Windows 11 File Explorer
 
