Microsoft’s storage stack just got a major rewrite in server builds — and an unsupported registry trick is letting enthusiasts flip the switch on Windows 11 to get dramatic NVMe gains, but not without real risk.
Background: why Windows treated NVMe like an old hard drive
For years Windows presented a uniform storage front to the OS and drivers: everything went through a SCSI-compatible abstraction. That made life simpler for compatibility — one canonical path for spinning disks, SATA SSDs, and NVMe SSDs — but it also papered over the architectural differences that make NVMe fast.

NVMe was designed for parallelism. The specification allows tens of thousands of submission/completion queue pairs and very deep queue depths per queue; NVMe controllers and modern SSD firmware exploit multiple queues and multi-core hosts to achieve very high IOPS and low latency. SCSI-style translation forces that multi-queue design through a single-queue, serialized path; the result is extra CPU cost, lock contention, and missed throughput on small, highly concurrent I/O workloads.
On December 15, 2025, Microsoft published a formal announcement that Windows Server 2025 now includes a new “native NVMe” storage path — an opt-in kernel path that removes the legacy SCSI translation layer and exposes NVMe semantics to the kernel directly. Microsoft’s server-side tests showed big improvements on synthetic 4K random workloads (the company reported up to ~80% more IOPS and roughly 45% fewer CPU cycles per I/O in their lab runs). Those numbers were measured on server-class hardware with enterprise NVMe devices and high concurrency; they represent best-case gains for heavy multi-threaded server workloads.
The code for this native NVMe path ships inside recent Microsoft servicing updates for server and appears in client builds too, but Microsoft did not enable it by default on consumer Windows 11 — primarily because of ecosystem compatibility issues. That’s where the community-discovered registry overrides enter the story.
What the registry tweak does (and the exact changes people are using)
The official, supported enablement path Microsoft documents for Windows Server 2025 is a single FeatureManagement override exposed after a recent cumulative update. On servers you can opt in via an official registry DWORD or Group Policy MSI that Microsoft published for administrators.

Enthusiasts testing on Windows 11 client builds discovered that a different set of numeric FeatureManagement override values — three DWORDs under the same registry key — can flip many Windows 11 systems to the native NVMe path. The community commands commonly used (run as Administrator) are:
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 735209102 /t REG_DWORD /d 1 /f
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1853569164 /t REG_DWORD /d 1 /f
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 156965516 /t REG_DWORD /d 1 /f
After a reboot, some systems show the change visually in Device Manager: NVMe devices move from the traditional Disk Drives / Devices category into Storage Disks / Storage Media, and the in-box Microsoft NVMe driver is used instead of the SCSI translation path. Depending on builds, you may see a driver file name such as StorNVMe.sys or nvmedisk.sys loaded for the device; driver naming and exact component layout have varied between server and client builds and across servicing releases.
Important: the three-client-DWORD method is community-discovered and undocumented for consumer Windows; Microsoft’s official server toggle uses a different single numeric ID. That distinction matters for support and future stability.
How much faster will your SSD get? (expectations versus reality)
The headline claims you’ll see “up to 80% more IOPS” come from Microsoft’s server benchmarks. Those tests were conducted on server-grade hardware and enterprise NVMe drives under heavy concurrency and synthetic workloads tuned to stress I/O parallelism. For those environments the newly exposed NVMe multi-queue behavior and lower kernel overhead make a dramatic difference.

Consumer reality is more mixed:
- Many community tests on Windows 11 client machines report measurable but smaller gains for everyday workloads — often in the range of single-digit to low double-digit percentage improvements for real-world activities such as build systems, small-file operations, and some application workloads.
- Synthetic microbenchmarks that stress many small concurrent requests (4K random read/write with high thread and queue depth) can show large improvements on compatible drives — in some public tests, certain drives saw very large jumps on random workloads.
- For sequential large-file transfers or basic single-threaded tasks (web browsing, launching a single program, video playback) the difference is typically negligible because those workloads are bounded by raw NAND throughput and PCIe bandwidth rather than driver overhead.
How much you actually gain depends on several factors:
- the SSD model and controller firmware,
- whether the SSD is using the Microsoft in-box driver or a vendor-supplied driver,
- platform features such as VMD/RST/Vendor RAID layers,
- whether your workload exploits parallel small-block I/O.
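Several of these factors can be checked before experimenting. As a sketch, the in-box PowerShell Storage module reports model, firmware, and bus type per physical disk (the property names below are standard; output formatting varies by system):

```shell
:: Show model, firmware revision, bus type, and media type for each physical disk.
:: Run from an elevated prompt; NVMe drives normally report BusType "NVMe".
powershell -Command "Get-PhysicalDisk | Select-Object FriendlyName, FirmwareVersion, BusType, MediaType"
```

A BusType of RAID rather than NVMe often indicates a VMD/RST or vendor RAID layer sits in front of the drive, one of the configurations the risk sections below say to avoid.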
The risks — why Microsoft left it off for consumer Windows
This registry tweak is not a benign “faster switch.” It changes how Windows presents storage to the system and therefore has implications across boot, recovery, tooling, and encryption. Community testing and Microsoft messaging have surfaced a list of real-world compatibility problems:
- Safe Mode and recovery environments can fail. Because Safe Mode enumerates a limited set of driver classes, switching the NVMe presentation can cause Safe Mode to be unable to load the new class and produce an INACCESSIBLE_BOOT_DEVICE or other boot errors. Community workarounds exist (registering the NVMe class GUID under SafeBoot keys), but they are manual and add risk.
- Backup and imaging tools can stop recognizing drives. Switching the driver can change the way a disk is enumerated and may alter device identifiers. Backup solutions that bind to specific disk IDs, or scheduled imaging scripts, may fail to find the expected target.
- Third‑party SSD utilities can misbehave. Vendor tools like Samsung Magician, Western Digital Dashboard, and others have been reported to either not detect drives, detect them twice, or display inconsistent SMART/health information under the new presentation.
- Vendor drivers may not flip. If you use a manufacturer-provided NVMe driver, that driver may keep control of the device stack and not benefit from the Microsoft native path, or it may conflict and produce instability.
- Boot failures on complex controller stacks. Systems that use Intel/AMD VMD, hardware RAID layers, or special chipset controllers can fail to boot or exhibit severe instability if the new path is forced incorrectly.
- Cluster and enterprise features: storage clustering, Storage Spaces Direct (S2D), NVMe-oF, or complex virtualization setups may expose edge cases in failover or resynchronization behavior.
- Data loss risk: in rare cases community reports include systems that required offline recovery or full reimage to repair after forcing the native path on incompatible hardware.
A verified checklist: what to do before thinking about trying this
If you still want to experiment, follow these strict preconditions. They’re non-negotiable.
- Create a full, verified image backup of any drive you care about (use a separate external disk). Do not rely on file-level copies only.
- Create a recovery USB drive and confirm you can boot to Windows Recovery Environment and offline tools.
- Test on a spare machine or a non‑critical system first. Do not try this first on your daily-driver laptop or an encrypted system with essential data.
- Confirm your NVMe is using the Microsoft in-box driver today (StorNVMe.sys or nvmedisk.sys); vendor drivers can block or complicate the switch.
- Ensure you have the latest firmware for your SSD and the latest BIOS/UEFI for your motherboard.
- Note whether your system uses Intel/AMD VMD, RAID, or special controller modes — if so, avoid experimenting on that machine.
- Consider running the same synthetic workload (DiskSpd or CrystalDiskMark) before and after to judge real impact for your workloads.
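For that before/after measurement, a DiskSpd baseline might look like the following; the target drive letter, file size, thread count, and queue depth are illustrative values to adapt to your hardware, not a recommendation:

```shell
:: 4K random reads for 30 seconds: 8 threads, 32 outstanding I/Os per thread,
:: against a 4 GiB test file on the drive under test (D: here).
:: -Sh disables software caching and hardware write caching so the device itself
:: is measured; -L records latency statistics including percentiles.
diskspd.exe -b4K -r -o32 -t8 -d30 -Sh -L -c4G D:\diskspd-test.dat
```

Run the identical command before and after the tweak and compare IOPS, average and tail latency, and CPU usage across the two reports.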
Step-by-step: how the community is enabling the native path (what they run, how they verify, and how to roll back)
Warning: the following is descriptive of community procedures reported in public testing. It is not an endorsement; performing these steps may render your system unbootable.
- Backup and recovery
- Create a full image backup and ensure external recovery media is ready.
- (Optional) Note the Microsoft server-supported key for admins
- On server builds Microsoft documented a supported registry override (a single DWORD) to opt in to Native NVMe. That server-side ID and process are the ones Microsoft supports for Windows Server 2025.
- Community client toggle (Windows 11 client builds)
- Open an elevated Windows Terminal / Command Prompt and run:
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 735209102 /t REG_DWORD /d 1 /f
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1853569164 /t REG_DWORD /d 1 /f
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 156965516 /t REG_DWORD /d 1 /f
- Reboot.
- Verify the change
- Open Device Manager and check whether NVMe devices are listed under Storage disks / Storage media rather than Disk drives / Devices.
- Inspect driver details for your NVMe controller to see if the in‑box Microsoft NVMe driver (StorNVMe.sys / nvmedisk.sys) is in use.
- Run a preplanned benchmark (DiskSpd / CrystalDiskMark) and your real-world workload to measure before/after differences.
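The Device Manager checks above can also be done from a terminal. A sketch using a built-in tool (the driver names are the ones community reports mention and may differ on your build):

```shell
:: List loaded kernel drivers and filter for the NVMe-related names.
:: findstr with space-separated tokens matches any of them (logical OR).
driverquery /v | findstr /i "stornvme nvmedisk"
```

If neither name appears and a vendor driver shows up instead, the override has likely not taken effect for that device.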
- If Safe Mode is broken (community workaround)
- Community testers found Safe Mode fails on many systems after enabling the native path. A commonly reported fix is to add a SafeBoot whitelist entry for the NVMe class so the driver is allowed in Safe Mode:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Network\{75416E63-5912-4DFA-AE8F-3EFACCAFFB14}" /ve /d "Storage Disks" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal\{75416E63-5912-4DFA-AE8F-3EFACCAFFB14}" /ve /d "Storage Disks" /f
- This is a community workaround, not an official Microsoft support procedure, and it may not be stable across builds.
- Rolling back
- To revert the client override, delete or set the three DWORDs to zero under:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides
- Reboot and verify Device Manager shows the previous driver/class.
- If the system won’t boot, use your recovery USB and the image backup to restore.
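The rollback can be scripted. A sketch using the same three value names, run from an elevated prompt (a reg delete error for an already-absent value is harmless):

```shell
:: Remove the three community override DWORDs, then reboot so the
:: storage stack reverts to the default SCSI-translation path.
reg delete HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 735209102 /f
reg delete HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1853569164 /f
reg delete HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 156965516 /f
shutdown /r /t 0
```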
Who should try this, and who should absolutely not
Try this only if:
- You have a non‑production test machine or a spare PC and you enjoy experimental tweaks.
- You run I/O‑heavy workloads where microsecond latency and small-block IOPS matter (databases, heavy virtualization, AI/ML scratch volumes) and you have the ability to stage and recover if something goes wrong.
- You can afford the time to verify backups, reconfigure backup/imaging software, and troubleshoot driver/utility issues.
Do not try this if:
- This is your daily laptop with BitLocker or other encrypted volumes and you do not have a tested recovery path.
- You rely on scheduled backups, imaging workflows, or vendor drive utilities for critical operations.
- Your system uses vendor NVMe drivers, VMD/RAID stacks, or you’re on an OEM laptop with specialized storage firmware and you don’t have a spare machine.
If you want better storage performance without this risk, safer alternatives include:
- Update SSD firmware and motherboard firmware; many vendors issue firmware that improves small-block performance and behaviour.
- Ensure Windows is using the latest in-box or vendor driver appropriate for your device; vendor drivers may already be optimized.
- Use file-system and Windows tuning that helps performance: enable TRIM, confirm write-caching settings, keep adequate free space, and use a modern filesystem configuration.
- If you need server-class performance, use Windows Server 2025 in supported deployments and follow Microsoft’s documented enablement and testing guidance.
Compatibility checklist — what will most likely break or need attention
- Backup/imaging software: reconfigure jobs that select disks by ID or path.
- Vendor utilities: expect anomalies — reinstall or revert vendor tools if needed.
- Boot and recovery: verify WinRE and Safe Mode; add SafeBoot whitelist only if you understand the mechanism.
- BitLocker: ensure you have the recovery key and test pre-boot unlock after toggling the driver.
- Virtual machines and disk snapshot systems: expect transient failures if device presentation changes.
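For the BitLocker item in particular, the in-box `manage-bde` tool can confirm encryption status and print the recovery password from an elevated prompt before you touch the storage path; record that password somewhere off the machine:

```shell
:: Encryption status of the system volume.
manage-bde -status C:

:: Key protectors for C:, including the numerical recovery password.
manage-bde -protectors -get C:
```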
Technical explanation: why the native path helps (and why it can hurt)
Why it helps:
- Native NVMe exposes multiple submission/completion queues and lets the host and controller operate in parallel across CPU cores. That lowers lock contention and reduces per-I/O overhead in the kernel.
- Eliminating redundant translation and serialization reduces CPU cycles per I/O, freeing CPU for workloads and improving tail latency in concurrent I/O situations.
Why it can hurt:
- A change of device presentation is a change of contract — lower-level drivers, vendor utilities, and system components often assume older SCSI semantics. Changing the semantics can reveal latent assumptions and bugs.
- System boot and recovery environments are minimal by design; introducing an unfamiliar class without registering SafeBoot allowances can break those minimal configurations.
- Not every drive or controller firmware exposes multi-queue features equally; switching to native handling can interact poorly with proprietary optimizations.
Measured results and real-world examples (what the headlines miss)
Public coverage of this tweak has included striking benchmark screenshots and “up to X%” claims. Two important context points:
- Microsoft’s ~80% IOPS and ~45% CPU-savings figures came from Windows Server 2025 lab testing on enterprise devices with high concurrency and are valid for that class of workload and hardware.
- Consumer testing is heterogeneous. Some drives show large improvements on synthetic multi-threaded random workloads; many consumer systems only see modest real-world gains. In short: the “almost twice as fast” headline is plausible in specific synthetic server tests but is not a universal consumer guarantee.
Practical recommendations and a safe test plan
- Do not enable this on a primary machine without a full disk image and recovery media.
- Use a spare PC or a secondary drive that can be swapped in and out for testing.
- Record pre-change metrics:
- Collect DiskSpd or CrystalDiskMark results.
- Snapshot CPU utilization under your workload.
- Apply the client registry overrides (if you accept the risk) and reboot.
- Re-run the same metrics and compare:
- Look for changes in 4K random IOPS, p99/p999 latency, and CPU cycles per I/O.
- Also run your daily workloads (build times, app startup, game load times) — synthetic gains don’t always translate to real-life improvement.
- Test booting into Safe Mode and WinRE after enabling; if Safe Mode fails, consider the SafeBoot registry whitelist workaround only if you fully understand the implications.
- If anything is broken, revert the DWORDs to zero or delete them and restore from your image backup if needed.
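To confirm at any point whether the override values are actually present (for example, after a rollback), the key can be queried directly:

```shell
:: List all FeatureManagement override values. When the tweak is active the three
:: community DWORDs (735209102, 1853569164, 156965516) appear with data 0x1;
:: after a clean rollback they are absent or set to 0x0.
reg query HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides
```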
Final judgement: wait, test cautiously, or adopt in production?
- For most consumers: waiting is the prudent choice. Microsoft shipped this feature to servers with an explicit opt-in model for a reason: the ecosystem must catch up. Expect Microsoft to expand official enablement for client Windows 11 once compatibility testing and telemetry mature.
- For enthusiasts and testers: this is an interesting and tangible preview of Windows’ storage future. If you enjoy tinkering, have safe backups and spare hardware, and accept the risk, this is a valid experiment — but treat it like a platform migration, not a tweak.
- For enterprise admins and storage teams: test in lab and follow Microsoft’s server guidance. If you run storage-intensive workloads where lower latency and higher IOPS directly affect SLA or throughput, plan a staged roll-out with thorough validation and vendor coordination.
Source: MakeUseOf, “A risky Windows registry tweak can make your SSD almost twice as fast”