Microsoft has quietly changed how Windows 11 indexes files so that File Explorer no longer processes the same paths more than once. The under‑the‑hood tweak, which Microsoft says will make Explorer searches faster and use less RAM, first appeared in Windows 11 Insider Preview Build 26220.7523 (KB5072043) for the Dev and Beta channels.
Background
File Explorer search has been one of those Windows features that mostly works but occasionally creates friction: long indexing sessions, transient spikes in memory and disk activity, and sluggish search results on systems with complex folder layouts or many drives. The behavior is rooted in how the Windows Search indexer schedules and executes indexing tasks: historically, identical work items could be enqueued or processed multiple times, particularly when files are reachable via more than one logical path (for example, junctions, mounted volumes, OneDrive placeholders, or multiple access points). Microsoft’s recent Insider release notes confirm a targeted change: the indexer now deduplicates those work items to avoid redundant processing.
This change is being rolled out as an experiment behind a staged toggle for Insiders in the Dev and Beta channels before Microsoft enables it by default in broader builds. That staged approach lets the company collect telemetry and user feedback while limiting possible regressions, since the change touches a widely used system service.
What exactly changed in Build 26220.7523
The release note, in plain language
Microsoft’s official release notes for Windows 11 Insider Preview Build 26220.7523 include a concise entry: “Made some improvements to File Explorer search performance by eliminating duplicate file indexing operations, which should result in faster searches and reduced system resource usage during file operations.” That single sentence sums up the engineering goal: do less duplicate work so searches are faster and lighter.
How the indexer behaved before
- The indexer could enqueue the same physical file or path multiple times when that file was reachable from several logical access points.
- Cloud‑placeholder systems (like Files On‑Demand), transient mounts, symbolic links/junctions, mapped network drives and simultaneous update requests from different subsystems could cause overlapping index jobs.
- The result: redundant reads, extra threads, more memory allocations and I/O peaks that made Explorer and the Search service appear heavy or unresponsive during wide indexing or broad search queries.
What the deduplication change does
- The indexer now attempts to canonicalize or coalesce identical work items so that a single physical file object is not processed more than once concurrently.
- Practically, this reduces redundant disk reads, cuts background thread churn, and lowers transient RAM consumption and CPU cycles while indexing or answering search queries.
- Because the change is inside the system indexer, File Explorer continues to query the same index — it simply benefits from a more efficient backend.
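Microsoft has not published the indexer’s internals, so the coalescing idea can only be sketched. Conceptually, it amounts to a work queue that keys pending items on a canonical file identity rather than on the raw path string. Everything below, including the class name and the (device, inode) identity key, is a hypothetical illustration, not Microsoft’s actual code:

```python
import os
import queue
import threading

class DedupIndexQueue:
    """Illustrative work queue that coalesces indexing requests for the
    same physical file, even when they arrive under different paths.
    A sketch only; the real Windows Search indexer is not public."""

    def __init__(self):
        self._pending = set()          # canonical identities currently queued
        self._lock = threading.Lock()
        self._queue = queue.Queue()

    @staticmethod
    def _identity(path):
        # Resolve symlink/junction-style aliases, then key on
        # (device, inode) so two logical paths to one file compare equal.
        st = os.stat(os.path.realpath(path))
        return (st.st_dev, st.st_ino)

    def enqueue(self, path):
        ident = self._identity(path)
        with self._lock:
            if ident in self._pending:
                return False           # duplicate: work is already scheduled
            self._pending.add(ident)
        self._queue.put((ident, path))
        return True

    def process_next(self):
        ident, path = self._queue.get()
        try:
            pass  # ...read and index the file's content here...
        finally:
            with self._lock:
                self._pending.discard(ident)  # allow future re-index requests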
Why this matters: performance and user scenarios
Faster, smoother searching on complex setups
Users who frequently search across multiple folders, external drives, or cloud‑backed storage are most likely to feel the difference. When the indexer avoids processing the same object multiple times, search queries that previously triggered redundant work can complete sooner and with lower memory overhead. Early reporting and community testing found noticeable improvements for those scenarios.
Less transient RAM usage, especially during indexing spikes
The Search service and indexer historically generated complaints when they consumed large amounts of RAM during heavy indexing operations. By trimming duplicate tasks, the system reduces peak transient allocations. That doesn’t mean the indexer will never use meaningful memory — large indexes still need memory — but overall transient spikes should be less frequent on systems that previously suffered from duplicated indexing work.
Broader system responsiveness and battery life
Less redundant disk I/O and fewer CPU cycles for duplicate work can translate into small but meaningful gains in responsiveness and battery life, particularly on laptops and resource‑constrained devices. The benefit is incremental: each avoided redundant operation is a tiny win, and those wins add up during heavy or repeated searches.
Technical deep dive: how duplicate indexing happens, and why deduplication helps
Common triggers for duplicate indexing
- Multiple logical paths: A single file can be reachable via a mount point, a junction, or a mapped network share; the indexer might treat those as separate targets unless canonicalization occurs.
- Cloud placeholders and Files On‑Demand: OneDrive and similar systems create virtual entries that can be realized as files at different times, confusing index heuristics and causing multiple indexing attempts.
- Concurrent update requests: Several subsystems (file system watchers, backup agents, or third‑party tools) might request index updates at the same time, resulting in repeated work.
- Transient mounts and removable media: Re‑enumeration of drives or re‑connections can provoke repeated index operations for the same content.
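The common thread in these triggers is that several distinct path strings can name one physical file. On POSIX-style filesystems the usual identity check is a (device, inode) comparison, which Python’s standard library exposes as `os.path.samefile`; the snippet below uses a symlink as a stand-in for a Windows junction or mount alias:

```python
import os
import tempfile

# Create a file plus a second logical path (a symlink) to it -- a stand-in
# for a junction, mount point, or mapped-drive alias on Windows.
tmp = tempfile.mkdtemp()
real = os.path.join(tmp, "report.txt")
with open(real, "w") as fh:
    fh.write("data")
alias = os.path.join(tmp, "alias.txt")
os.symlink(real, alias)

# Comparing path strings sees two targets; comparing identity sees one.
print(real == alias)                  # False
print(os.path.samefile(real, alias))  # True
```

An indexer that keys work on the path string would enqueue this file twice; one that keys on file identity enqueues it once.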
What deduplication must do well
To remove duplicates reliably, the indexer needs to:
- Correctly identify the same physical object even when accessed through different logical paths.
- Avoid false positives (treating distinct content as identical) to prevent missed indexing updates.
- Handle cloud placeholder state transitions (online vs. offline) without skipping fresh content.
- Maintain stability and rollback options during staged rollouts.
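One illustrative way to satisfy the first two requirements above is to build the dedup key from physical identity plus a change stamp, so that aliases of one unchanged file coalesce while a later modification is never suppressed as a “duplicate.” The function below is an assumed sketch (Windows would more plausibly use volume/file IDs and USN journal data), not the indexer’s real key:

```python
import os

def work_item_key(path):
    """Hypothetical dedup key for an indexing work item.

    (st_dev, st_ino) collapses different logical paths to one physical
    file, while st_mtime_ns makes a modified file produce a *new* key,
    so deduplication cannot hide fresh content from the index.
    """
    st = os.stat(os.path.realpath(path))  # resolve link/junction-style aliases
    return (st.st_dev, st.st_ino, st.st_mtime_ns)
```

With a key like this, two aliases of an unchanged file compare equal (one job), while touching the file yields a different key (a legitimate new job).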
If any of these are mishandled, deduplication could cause incomplete indexes or stale search results — which is why Microsoft’s rollout is staged and telemetry‑driven.
Verification and independent coverage
This change is documented in Microsoft’s Windows Insider release notes and has been picked up by multiple independent outlets and community testing threads. Microsoft’s blog post for Build 26220.7523 names the improvement directly, and outlets such as Windows Latest, Igor’s Lab, and community forums have corroborated and explained the practical effect. Those reports consistently describe the change as an indexer‑side optimization, not a new search engine inside File Explorer. A few outlets and community testers have suggested tangible performance gains; one report referenced roughly twice‑as‑fast results in early tests, but that figure is based on limited experiments and should be treated as provisional until broader, reproducible benchmarks appear.
Rollout strategy: how to try it now and what to expect
Who sees it today
- The deduplication experiment is available to Windows Insiders in the Dev and Beta channels running Windows 11, version 25H2, via Build 26220.7523 (KB5072043).
How insiders can opt in
- Open Settings > Windows Update.
- Turn on the toggle for “Get the latest updates as soon as they’re available” or use the staged toggle offered in the build (Insider settings may vary).
- Install the build and allow telemetry/feature flags to enable the deduplication experiment for your machine.
Because this is a controlled experiment, not every Insider will see the change immediately; Microsoft will enable the feature for subsets of devices and collect telemetry before a full rollout.
When it will reach the stable channel
Microsoft has not provided a specific ship date for enabling the feature by default, but the staged testing suggests a broader deployment will follow successful telemetry and feedback. Independent outlets speculated on a general availability window in the weeks following the Insider flights, but those timelines can slip depending on results. Treat GA timing as tentative until Microsoft announces it for stable channels.
Potential benefits — who wins most
- Power users who search across multiple drives or complex directory trees will likely notice the largest improvements.
- Laptops and resource‑constrained devices may benefit through reduced spikes in RAM and disk usage during indexing.
- Enterprise environments with mounted network volumes, many junctions, or cloud placeholders could see reduced indexer noise and fewer helpdesk reports about Search-related slowdowns.
- Third‑party apps that call into Windows Search benefit indirectly, because they all query the same underlying index; the improvements are therefore system‑wide rather than Explorer‑only.
Risks, caveats and what to watch for
1. Risk of missed or stale results (low but real)
Deduplication must accurately detect identical files; improper canonicalization could cause a file reachable via a non‑standard path to be skipped or treated as already indexed. Microsoft’s staged rollout and telemetry collection are explicitly designed to catch such edge cases before broad deployment. Users who rely on exact, real‑time search results (for example, when searching indexed archives or frequently updated network shares) should be cautious during the experimental phase.
2. Regressions and compatibility with third‑party tools
Third‑party file managers, backup utilities, or search replacements that interoperate with the Windows Search indexer could reveal unexpected behavior if they rely on previous index semantics. Administrators should monitor feedback and run controlled tests in enterprise environments before rolling the update out widely.
3. No silver bullet for large indexes
Deduplication reduces redundant work, but it does not eliminate the base cost of maintaining a large index. Devices with very large libraries (hundreds of thousands of files) may still experience heavy indexing loads; deduplication simply reduces unnecessary duplication on top of that. Expect incremental improvements, not a complete overhaul of indexing economics.
4. Performance claims need independent validation
Some outlets and posts have reported significant speedups (one reported “roughly twice as fast” in a small test), but those numbers are not yet validated at scale. Treat early performance figures as signals to guide testing rather than definitive proof. Independent, reproducible benchmarks are essential before making claims about universal speed improvements.
Practical tips for Windows users and admins
For Windows Insiders
- Enable the “Get the latest updates as soon as they’re available” toggle if you want to try the indexing deduplication experiment sooner, but back up important data and be prepared to file Feedback Hub reports if you encounter issues.
- Test search scenarios that reflect your typical workflow: multi‑drive searches, OneDrive on‑demand files, and shares with junctions or mounts.
- If you detect missing or stale search results, collect logs and submit Feedback Hub entries so Microsoft can investigate.
For IT admins
- Pilot the update in a controlled group before wide deployment.
- Validate searches across network shares, mapped drives, and cloud‑synced repositories.
- Monitor feedback channels and telemetry for any uptick in search‑related helpdesk tickets.
For regular users
- Expect modest improvements in search speed and occasional reductions in memory spikes for systems that previously suffered from duplicate indexing.
- No manual action is necessary once the change is enabled by Microsoft for your device — but Insiders can opt in to try it earlier.
How this fits into Microsoft’s broader approach
Microsoft’s decision to implement deduplication inside the indexer rather than rewrite Explorer’s search experience is characteristic of a pragmatic, low‑surface‑area engineering approach. The company focuses on targeted optimizations that reduce resource waste and are easy to validate via telemetry. Because the change is indexer‑level, other features that depend on Windows Search (Start menu search, Ask Copilot integration, enterprise search tools) stand to benefit indirectly — a small systemic improvement that pays dividends across multiple Windows features.
Final analysis: practical impact and realistic expectations
This is a meaningful but incremental improvement. The change does not reinvent Windows search or introduce a new indexing engine; instead, it cleans up inefficiencies in the existing pipeline. That means the feature is unlikely to produce headline‑grabbing speedups for every user, but it will measurably improve everyday experiences for those who hit the particular pain points caused by duplicate indexing.
For users and admins, the most important takeaway is to treat the change as a positive systems optimization that reduces unnecessary resource usage. For testers and power users, the staged rollout is an opportunity to validate the behavior against specific, real‑world search patterns. For everyone else, expect the improvement to arrive silently in future updates once Microsoft is confident the experiment behaves at scale.
Conclusion
Microsoft’s deduplication tweak in Windows 11 Insider Preview Build 26220.7523 is an engineer‑level fix with user‑facing benefits: fewer redundant indexing operations, lower transient RAM usage, and faster searches in scenarios that previously triggered duplicated work. The approach is cautious (staged toggles, telemetry, and Insider testing), which is appropriate for a change touching the Windows Search indexer. Dramatic, universal improvements should not be expected overnight, but the update represents thoughtful housekeeping that reduces waste and improves responsiveness where it matters most. Users on the Dev or Beta channels can opt in to test the feature now; everyone else will see it as Microsoft rolls the optimized indexer into broader builds after validation.
Source: hi-Tech.ua, “Windows 11 will use less RAM for searching in Explorer”