Microsoft’s Windows Server 2025 introduces a long‑awaited, opt‑in native NVMe storage path that bypasses the decades‑old SCSI translation layer — and enterprising users have already found they can force the same native NVMe path onto Windows 11 by toggling the same controls. The change is significant: Microsoft’s engineering and external tests show substantial I/O and CPU efficiency gains on modern NVMe SSDs, but the flip side is a host of compatibility, tooling and support risks that make this a lab‑first, not a one‑click production change.

Background / Overview

The Windows I/O stack has historically treated block devices through a SCSI‑style abstraction designed in the era of spinning disks and SANs. That SCSI translation layer simplified driver models and compatibility, but it also introduced per‑I/O translation, locking and serialization that increasingly limit the potential of modern NVMe SSDs designed around massive parallelism and per‑core queue affinity.
NVMe’s architecture natively supports very large numbers of submission/completion queues and deep per‑queue depths — the standard permits up to roughly 65,535 I/O queues, each with up to 65,536 entries, a theoretical command space measured in the billions. Exposing those semantics to the OS instead of translating NVMe commands into SCSI semantics is the core of the Server 2025 change. Those design numbers are part of the NVMe specification and explain why native NVMe can unlock a lot more headroom on PCIe Gen‑4/Gen‑5 hardware.

Microsoft packaged the native NVMe stack as part of the October servicing wave for Windows Server 2025 (the cumulative update identified as KB5066835), but the feature ships disabled by default and must be intentionally enabled by administrators. Microsoft published a Tech Community post with the supported enablement method and the microbenchmark parameters used in its lab tests so engineers can reproduce results in a controlled environment.

What Microsoft shipped (the essentials)​

  • The deliverable: a new native NVMe I/O path for Windows Server 2025 that avoids per‑I/O SCSI translation and exposes multi‑queue NVMe semantics to the kernel.
  • Delivery model: shipped via the October 2025 cumulative servicing package (KB5066835). The change is available but opt‑in; it requires applying the LCU and enabling a published feature toggle.
  • Proof artifacts: Microsoft published the exact DiskSpd invocation and hardware list used for their synthetic tests so operators can reproduce the microbenchmarks. The company’s lab figures show very large gains on a selected testbed (multi‑socket server, high‑end enterprise NVMe devices).
Why this matters: by aligning the OS path to NVMe semantics the kernel reduces translation overhead and lock contention, which lowers per‑I/O CPU cost and tail latency. That is precisely the gain Linux servers got years ago by treating NVMe natively, and now Windows Server’s storage stack is catching up.

How to enable native NVMe (official, supported method)​

Microsoft published an enablement path that uses a FeatureManagement override in the registry or a Group Policy artifact. The vendor‑documented steps to flip the opt‑in toggle are:
  • Install the cumulative update that contains the Native NVMe components (the October servicing LCU that includes KB5066835 or a later servicing bundle).
  • Run (as Administrator) the published command to enable the feature:
    reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1176759950 /t REG_DWORD /d 1 /f
  • Reboot and verify the device presentation in Device Manager; Microsoft’s guidance says NVMe devices will be visible under “Storage disks” and should use the Windows NVMe driver (StorNVMe.sys) in the path that shows performance improvements.
Microsoft also supplies a Group Policy MSI and guidance for controlled deployment via GPO for managed enterprises; use those artifacts for scripted or fleet rollouts rather than ad‑hoc registry edits.

Important operational note: the registry key above is Microsoft’s published, supported toggle for Server 2025; community posts that circulate other undocumented keys or variations should be treated as unverified. The vendor’s recommended, documented approach is the safe starting point.
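For admins who script their rollouts, the documented toggle and a post‑reboot sanity check fit in a few lines of PowerShell. This is a minimal sketch: the registry path and value ID come from Microsoft’s published guidance above, while the verification step (reading each disk’s driver service and expecting stornvme on the in‑box path) is our own illustration using standard PnP cmdlets.

  # Apply Microsoft's documented Native NVMe override (Server 2025, supported path).
  $path = 'HKLM:\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides'
  New-Item -Path $path -Force | Out-Null
  New-ItemProperty -Path $path -Name '1176759950' -PropertyType DWord -Value 1 -Force | Out-Null
  Write-Output 'Override set; reboot before verifying.'

  # After the reboot: list each present disk with the kernel driver service it uses.
  # Disks on the native path should report the in-box stornvme service.
  Get-PnpDevice -Class DiskDrive -PresentOnly | ForEach-Object {
      $svc = (Get-PnpDeviceProperty -InstanceId $_.InstanceId -KeyName 'DEVPKEY_Device_Service').Data
      [pscustomobject]@{ Disk = $_.FriendlyName; DriverService = $svc }
  }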

The claimed performance gains — what’s measured and what that means​

Microsoft’s microbenchmarks used a DiskSpd 4K random read stress harness and a high‑end testbed; the published DiskSpd invocation allows repeatability:
  • diskspd.exe -b4k -r -Su -t8 -L -o32 -W10 -d30
In Microsoft’s lab numbers the native NVMe stack produced headline gains: up to ~80% higher IOPS on selected 4K random read tests and roughly ~45% lower CPU cycles per I/O in those scenarios when compared to the legacy stack. These figures were reproduced by multiple independent outlets and community testers at smaller scales, but results vary widely by device, firmware, driver and workload.

Realistic expectations for client desktops and mixed workloads:
  • Enterprise, server‑class NVMe devices on PCIe Gen‑4/Gen‑5 with high concurrency will benefit most — the new stack reduces kernel overhead and improves tail latency.
  • On consumer rigs, independent reports and community tests commonly show single‑digit to double‑digit percent gains (often ~10–15% in throughput or lower tail latencies) on some drives; other drives, especially those using vendor‑supplied drivers already optimized for NVMe, may show negligible change. ComputerBase and other outlets reproduced desktop test results in the 10–15% range for select consumer NVMe SSDs.
Why gains vary:
  • NVMe firmware and controller design determine how much headroom exists beyond the old SCSI‑translation path.
  • Some vendor drivers (e.g., Samsung, Western Digital) implement host‑side optimizations already; if those drivers are active, the delta against Microsoft’s in‑box driver may be small or nonexistent.
  • Benchmarks are sensitive to queue depth, concurrency, file system layout (NTFS vs ReFS), and CPU topology, so synthetic microbenchmarks can overstate what real applications will see; a repeatable before/after wrapper is sketched below.
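Because the deltas are so sensitive to those variables, the only honest number is a before/after run of the same invocation on the same hardware. Below is a hedged wrapper sketch around Microsoft’s published profile. It assumes diskspd.exe is on the PATH; the published invocation as quoted above omits the target, so a placeholder file is supplied here, and the -c8G option (a standard DiskSpd flag) creates an 8 GiB test file of our own choosing. Never point the target at a volume holding data you care about.

  # Run the published 4K random read profile against a dedicated test file
  # and keep the raw output so pre- and post-toggle runs can be compared.
  $stamp  = Get-Date -Format 'yyyyMMdd-HHmmss'
  $target = 'D:\testfile.dat'   # placeholder: use a scratch volume
  diskspd.exe -b4k -r -Su -t8 -L -o32 -W10 -d30 -c8G $target |
      Tee-Object -FilePath "diskspd-$stamp.txt"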

Risks, compatibility headaches and real‑world caveats​

This is a kernel‑level I/O semantics change. That means the upside can be accompanied by unexpected side effects if hardware, firmware, drivers, or management tooling assume SCSI‑style behavior.
Key issues observed in community testing and Microsoft’s own rollout telemetry:
  • Tooling and inventory systems: Because the storage path and device presentation change, some vendor management, monitoring and disk utilities may not recognize drives properly — they might show devices twice, not at all, or interpret identifiers differently. Several community threads report backup, imaging and drive‑monitoring tools failing to match a new Disk ID after a toggle.
  • Driver interaction: Systems using vendor‑supplied NVMe drivers may see different behavior. Microsoft’s documented gains are primarily when using the in‑box StorNVMe.sys driver; vendor drivers may already provide similar optimizations or conflict with the new path. Validate both driver variants during testing.
  • Disk identity changes: Some users report altered disk IDs or device paths after toggling the feature, which can break licensing or backup solutions that bind to specific disk identifiers. Backups and imaging solutions are particularly sensitive.
  • Clustered storage interactions: For Storage Spaces Direct (S2D), NVMe‑over‑Fabrics and clustered topologies, the timing changes in resync, repair and failover can reveal new edge cases. Microsoft advises exhaustive cluster validation; community posts echo the need for staged rollouts for clustered hosts.
  • Servicing collateral: The native NVMe capability was delivered inside a large LCU (KB5066835) that also introduced unrelated regressions (for example, WinRE USB input problems and HTTP.sys regressions that required out‑of‑band patches). That history underscores the need to validate the entire image post‑update, not just the NVMe behavior.
Bottom line: treat enabling native NVMe as a platform migration, not a micro‑tweak. Test, stage and monitor.

The Windows 11 angle — the registry hack and what consumers should know​

Tom’s Hardware, ComputerBase and several German outlets reported community experiments that used similar registry toggles to enable the native NVMe path on Windows 11 client systems. In practice, enthusiasts found that after applying the appropriate registry values and ensuring a recent servicing baseline, some Windows 11 machines did show throughput and latency improvements — in many cases in the ~10–15% throughput range on PCIe 4.0 consumer drives, though results vary.

Important caveats for desktop users:
  • Microsoft’s Tech Community guidance and support artifacts target Windows Server 2025; using the same toggles on Windows 11 is undocumented and not supported as a general client rollout. Proceed at your own risk.
  • Registry changes can alter disk presentation and ID — this can break backup software, imaging workflows, drive‑based licensing schemes and out‑of‑band utilities that match by device ID. Make full image backups and test recovery before applying changes on a primary machine.
  • Some users reported that vendor drive tools stopped recognizing the device or showed it twice, and that reinstallation of vendor drivers or a restore was necessary to return to the prior state.
If you are an enthusiast considering the change on a desktop:
  • Make a full disk image and copy critical files externally.
  • Test the registry toggle in a VM or a spare machine first.
  • Keep a recovery plan (bootable USB, restore images) in case a rollback or full reimage is required.
  • Prefer to run tests with the Microsoft‑published DiskSpd invocation and also with real workloads you care about (games, editors, build systems) to verify meaningful gains.

Practical validation playbook (recommended)​

For IT teams, admins, and power users who want to evaluate the new stack safely:
  • Inventory and baseline:
    • Record NVMe model, firmware, vendor driver, and OS build.
    • Capture baseline metrics: IOPS, average/p99/p999 latency, host CPU utilization, Disk Transfers/sec (a scripted baseline sketch appears at the end of this section).
  • Update firmware & drivers:
    • Upgrade NVMe firmware and vendor drivers to vendor‑recommended versions before changing OS behavior.
  • Apply servicing in isolated lab nodes:
    • Install the LCU that contains Native NVMe (the October servicing wave / KB5066835 or later), then validate the overall image for unrelated regressions.
  • Enable using the documented toggle (only after lab validation):
    • Use Microsoft’s FeatureManagement override or GPO artifact; avoid undocumented registry hacks.
  • Run synthetic and real workload tests:
    • Reproduce Microsoft’s DiskSpd invocation, then run fio and representative application tests (DB TPC‑like loads, VM boot storms, file server metadata operations). Measure p99/p999 tails and CPU per‑I/O.
  • Cluster and replication tests:
    • For S2D and NVMe‑oF, test node loss, resync, live migration and rebuild stress scenarios.
  • Staged rollout:
    • Canary a small set of production hosts, monitor telemetry, then widen rings with rollback windows in place.
  • Monitoring:
    • Add performance counters for Physical Disk, NVMe SMART attributes, OS queue depths and CPU per‑I/O trends.
This is essentially Microsoft’s own recommended validation pattern adapted for enterprise fleets.
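The inventory and baseline step is easy to script so that post‑enablement runs can be diffed against a recorded starting point. A sketch using in‑box Storage and performance‑counter cmdlets follows; the file names are arbitrary, and the p99/p999 latency tails still come from DiskSpd or fio, which report them directly.

  # Record drive inventory (model, firmware, bus) for the baseline.
  Get-PhysicalDisk |
      Select-Object FriendlyName, MediaType, BusType, FirmwareVersion, Size |
      Export-Csv -Path 'nvme-inventory.csv' -NoTypeInformation

  # Capture 60 one-second samples of disk and CPU counters as a coarse baseline.
  Get-Counter -Counter @(
      '\PhysicalDisk(_Total)\Disk Transfers/sec',
      '\PhysicalDisk(_Total)\Avg. Disk sec/Transfer',
      '\Processor(_Total)\% Processor Time'
  ) -SampleInterval 1 -MaxSamples 60 |
      ForEach-Object {
          $t = $_.Timestamp
          $_.CounterSamples | ForEach-Object {
              [pscustomobject]@{ Time = $t; Counter = $_.Path; Value = $_.CookedValue }
          }
      } | Export-Csv -Path 'baseline-counters.csv' -NoTypeInformation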

Long‑term implications and where the ecosystem goes from here​

Native NVMe in Windows Server 2025 is a strategic modernization: it reduces a long‑standing mismatch between an OS I/O model tuned for legacy block devices and the realities of modern NVMe hardware. The technical benefits are real and measurable: lower per‑I/O CPU cost, higher IOPS potential, and improved tail latency for highly concurrent workloads.
Longer term, expect:
  • Vendors and drive firmware to adapt and tune for the native path, narrowing differences between vendor drivers and the in‑box stack.
  • Storage management and backup vendors to patch their software to handle changes in device presentation and disk ID behavior.
  • Microsoft to consider staged client rollouts only after telemetry stabilizes across a large number of device/firmware combinations.
That said, the change is not free: rolling it out at scale requires coordination with OEMs and vendors, careful regression testing and observability improvements to detect and remediate any incompatibilities.

Editor’s assessment — strengths, risks, and a pragmatic recommendation​

Strengths:
  • Platform modernization: Native NVMe addresses a fundamental architectural mismatch and unlocks substantial headroom for IO‑bound server workloads.
  • Measurable gains: Microsoft’s lab numbers and independent tests agree that well‑matched hardware and drivers can yield improvements ranging from double digits to several tens of percent in IOPS, along with meaningful CPU savings.
  • Future‑proofing: Exposing NVMe semantics natively opens the door to future features (multi‑namespace, vendor extensions, direct submission paths) on Windows.
Risks:
  • Compatibility: Vendor drivers, backup tools, monitoring systems and clustered storage topologies can be affected; expect broken integrations until tooling is updated.
  • Servicing collateral: Large LCUs can introduce unrelated regressions; validate the entire update, not just the NVMe feature.
  • Unsupported client use: Forcing this on Windows 11 is currently community‑led and not an official client rollout — desktop users should treat it as experimental.
Pragmatic recommendation:
  • For enterprises: follow the lab → canary → staged rollout path. Coordinate with NVMe vendors and OEMs, update firmware and drivers, and validate cluster behaviors before broad enablement.
  • For enthusiasts: if you value bleeding‑edge performance and have spare hardware or reliable backups, test in a VM or non‑critical machine. Otherwise, wait for Microsoft or OEMs to formalize client support and tooling updates.

Native NVMe in Windows Server 2025 is a major, overdue step toward matching operating‑system behavior with modern storage hardware. The upside for I/O‑heavy workloads is clear; the operational complexity and compatibility surface area are equally real. Measure before you flip the switch, stage the rollout, and keep full backups handy — the performance prize is significant, but it comes with caveats that demand engineering discipline.
Source: Tom's Hardware https://www.tomshardware.com/softwa...locked-for-consumer-pcs-but-at-your-own-risk/
 

Lenovo’s rumored plan to ship a SteamOS‑powered Legion Go 2 at CES 2026 is a straightforward, high‑stakes gambit: keep the same headline hardware—AMD Ryzen Z2 Extreme, up to 32 GB LPDDR5X, and 1–2 TB PCIe Gen4 storage—swap Windows 11 for Valve’s controller‑first Linux stack, and sell a handheld that promises fewer background headaches, better sustained performance, and potentially a lower price by avoiding a Windows license. This feature examines what the leak actually claims, verifies the hardware and software details against public reporting, explains why the OS matters more on handhelds than it does on desktops or laptops, and weighs the business, technical, and consumer risks. It cross‑checks key specifications and assertions with independent outlets and flags what remains speculative ahead of a CES confirmation.

Background

The Legion Go family has become Lenovo’s clearest commitment to premium handheld PC gaming: a large, detachable‑controller platform that aims to blend laptop‑class silicon with console‑style ergonomics. The first Legion Go and the later Legion Go S set the stage—one as a powerful Windows handheld, the other as Lenovo’s initial, SteamOS‑preinstalled experiment that rolled out in May 2025. Those moves show Lenovo is willing to ship the same hardware with different OSes depending on the target audience. What’s new is the report that Lenovo might now offer its most powerful model, the Legion Go 2 (a.k.a. Legion Go Gen 2), with SteamOS preinstalled. Multiple leak reports and aggregation pieces have converged on the same core narrative: identical high‑end hardware, different factory image, targeted at users frustrated by Windows on handhelds. That story has been amplified across the tech press and is the subject of the industry leak summarized in recent coverage.

Overview: what the leak actually says​

The leak is concise and consistent across outlets: Lenovo would show a “Powered by SteamOS” Legion Go 2 at CES 2026, using the same internal components as the Windows SKU. The specific, recurring technical claims are:
  • SoC: AMD Ryzen Z2 Extreme (Zen 5 CPU cores, RDNA 3.5‑era iGPU).
  • Memory: up to 32 GB LPDDR5X (high‑speed mobile memory).
  • Storage: 1 TB or 2 TB PCIe Gen4 M.2 (2242) NVMe options, with microSD expansion.
  • Display: 8.8‑inch PureSight / OLED, WUXGA (1920×1200), 144 Hz refresh with VRR.
  • Battery: 74 Wh pack and USB‑C fast charging.
  • Controls: detachable TrueStrike controllers with Hall‑effect sticks and grip buttons.
  • Price expectation: the SteamOS SKU might be cheaper by the approximate cost of a Windows license (roughly $100), though specific pricing is unconfirmed.
These hardware points match Lenovo’s publicly released Legion Go 2 specifications for the Windows SKU and independent review sheets, making the hardware‑side claims plausible; the unknown is Lenovo’s formal decision to ship a SteamOS factory image and how Valve will certify or support such a high‑end third‑party device.

Why the OS swap matters on handhelds​

The practical contrast: Windows 11 vs SteamOS on pocket hardware​

Windows 11 is a full desktop OS with many advantages—broad game compatibility, native anti‑cheat support in some cases, and the ability to run non‑game applications. But on handhelds, Windows brings persistent tradeoffs: background services, desktop notifications, and a general UI that’s not optimized for thumbstick navigation. On constrained power and thermals, those overheads translate into less available headroom for the game loop and more complex suspend/resume behavior. Multiple hands‑on reviews and comparisons in 2024–2025 documented tangible runtime and thermal benefits when identical hardware ran a tuned Linux/SteamOS image versus a stock Windows stack.
SteamOS, by contrast, is purpose‑built for controller navigation and handheld workflows. Valve’s stack (Steam client UI + Proton compatibility layer) gives a “pick up, press A, play” experience, plus a coordinated update cadence and compatibility tooling that benefits console‑style devices. For users whose primary need is playing Steam titles, the OS swap reduces friction and often improves sustained framerate and battery life in real workloads. The Verge and other outlets have documented Valve’s effort to extend SteamOS compatibility labels and tooling beyond the Deck, making third‑party SteamOS devices more viable.

The performance delta: what reviewers observed​

Independent comparisons on earlier handhelds showed that a lighter OS can return measurable gains under sustained loads—sometimes in the range of single‑digit to low‑double‑digit percentage points in power‑sensitive workloads, and in specific cases more substantial improvements for heavily CPU‑ or GPU‑bound scenes. These deltas are workload dependent; titles that use less middleware and have straightforward GPU paths benefit most. That said, Windows retains advantages in accessory, peripheral, and software compatibility that SteamOS must address case‑by‑case.

Verifying the headline specs​

Cross‑checking the leak’s claims against reputable reporting shows concordance on the important numbers:
  • The Ryzen Z2 Extreme as the top‑end option and 32 GB LPDDR5X memory ceiling are listed in multiple hands‑on and spec breakdowns. Tom’s Hardware provides a detailed spec sheet aligning with the leak.
  • The 8.8‑inch OLED, 1920×1200, 144 Hz display and 74 Wh battery are likewise consistently reported in Lenovo’s own materials and independent reviews.
  • Lenovo previously shipped the Legion Go S in a SteamOS variant on May 25, 2025, so OEM‑level precedent exists for Lenovo to produce SteamOS factory images on Legion hardware. This release and subsequent inventory/pricing coverage are documented by Engadget, Windows Central, and Tom’s Guide.
Where the leak and the reporting diverge is primarily on timing and pricing. The notion that SteamOS units could be cheaper because they omit Windows licensing is plausible, but the exact MSRP differential (15–20% or a flat ~$100 saving) is speculative until Lenovo publishes SKUs. Multiple leak write‑ups emphasize that price and regional availability are not confirmed.

Strategic rationale: why Lenovo might do this​

  • Broaden market appeal: offer a lower‑friction option for handheld‑first gamers who dislike Windows, while preserving Windows SKUs for power users who need desktop compatibility.
  • Differentiate in a crowded market: high‑end hardware plus a console‑like OS gives Lenovo a direct competitor against the Steam Deck and other Linux‑oriented devices.
  • Leverage prior partnership success: the Legion Go S SteamOS model gave Lenovo experience shipping SteamOS and revealed there is demand for a factory‑installed Linux handheld.
  • Reduce support complexity for certain use cases: for titles that run natively on Proton and for users who primarily use Steam, SteamOS simplifies driver and UX expectations by constraining the software surface.
These are practical, low‑risk benefits for Lenovo if Valve’s compatibility program and update cadence can scale to a high‑end, Z2‑powered device.

Key technical and ecosystem risks​

1. Anti‑cheat and multiplayer compatibility​

One of the biggest unresolved problems for SteamOS and Proton historically has been anti‑cheat and closed middleware. Many competitive multiplayer titles rely on Windows‑only anti‑cheat solutions that either don’t run on Linux or have limited support through compatibility layers. Valve and the broader community have made progress, and Valve has introduced compatibility labeling for third‑party SteamOS devices, but anti‑cheat remains a practical blocker for some players. Any OEM SteamOS device must clearly communicate which titles are fully playable to avoid buyer frustration.

2. Driver and firmware coordination​

Shipping a SteamOS SKU is not merely a matter of reinstalling an OS image. OEMs must coordinate firmware, driver optimization (especially for RDNA‑class GPUs), and thermal profiles. The Z2 Extreme is powerful but thermally hungry; maximizing sustained performance on handheld thermals depends on kernel, driver, and power‑management engineering that must be validated across representative workloads. Early reviewers will scrutinize thermal throttling, sustained 1% lows, and charging behaviors.

3. Update cadence and support model​

One attraction of SteamOS on Valve hardware is Valve’s ability to push targeted handheld updates. For a third‑party device, Valve and Lenovo must define how updates are delivered: Will SteamOS images on Lenovo hardware get the same timely fixes? Who takes responsibility for regression testing, and how will Valve’s compatibility program apply to a high‑end Z2 Extreme SKU? Without clear commitments, third‑party SteamOS users risk fragmentation.

4. Channel and regional availability​

Lenovo’s previous SteamOS rollout for the Legion Go S showed inconsistent availability across regions and channels; some markets saw stock issues and delayed shipments. If Lenovo repeats that roll‑out approach for a higher‑priced Legion Go 2, limited SKU availability or regional limitations could dampen impact and frustrate buyers. Community threads following the Go S launch highlighted regional stock and pricing variability that OEMs must plan around.

Consumer implications: who should care and why​

  • Gamers who want a console‑like, gamepad‑first handheld experience and who primarily play titles that are known to work under Proton will find a SteamOS Legion Go 2 attractive. They gain faster boot‑to‑play flows, potentially better battery life, and fewer background interruptions.
  • Power users and players dependent on Windows‑native titles, non‑Steam stores, or complicated middleware (for modding or game development workflows) will prefer the Windows SKU for broader app compatibility.
  • Buyers sensitive to price will watch for the MSRP differential; removing Windows licensing could lower cost, but OEMs often reallocate savings to bundle other features or margin, so the consumer benefit is not automatic.
Practical buying guidance: wait for independent reviews that test thermals, sustained framerate, anti‑cheat behavior, and real‑world battery life on both OS variants. Treat a CES announcement as the start of validation, not the end.

What to expect at CES 2026​

If Lenovo follows the leak’s script, CES will be used to:
  • Show a hardware demo comparing SteamOS and Windows builds on the same device.
  • Clarify SKU mapping, pricing, and regional availability for SteamOS models.
  • Announce Valve’s involvement: whether the Legion Go 2 will join a compatibility program and get Deck‑parity updates.
  • Outline support for anti‑cheat and publisher collaboration.
If Lenovo does not announce a SteamOS Legion Go 2 at CES, the company may still signal intent by highlighting third‑party SteamOS support or by demonstrating close Valve cooperation on other Legion hardware. Either way, expect lots of early hands‑on coverage and immediate third‑party testing around thermals and compatibility—a crucial period for buyer trust.

Strengths of the rumored move​

  • Lower friction for handheld gaming: A SteamOS factory image removes an entire category of setup and configuration headaches for mainstream players.
  • Performance potential: A leaner Linux stack plus optimized drivers can free thermal and power headroom for gaming.
  • Market segmentation without hardware duplication: Lenovo can sell multiple OS SKUs from the same bill of materials, attracting different buyer personas.
  • Stronger Steam/Valve integration: Being part of Valve’s growing ecosystem for third‑party devices offers visibility and support for the Steam library at scale.

Weaknesses and open questions​

  • Anti‑cheat compatibility is unresolved: Competitive online games remain the primary caveat for SteamOS handhelds, and buyers must assess title‑by‑title support.
  • Support complexity for high‑end hardware: The Z2 Extreme demands OEM/driver coordination; poor tuning could negate any software advantages.
  • Unconfirmed pricing and regional rollout: Without clear MSRP and stock plans, the strategic advantage could be muted by availability problems or unfavorable pricing decisions.

Recommendations for prospective buyers​

  • Wait for independent, third‑party reviews that include long‑duration thermal and battery tests on the SteamOS variant, not just Lenovo’s snappy demos.
  • Verify game‑compatibility on a personal wishlist; consult Valve’s compatibility labels and community resources for anti‑cheat notes.
  • Consider whether you need full Windows compatibility for non‑Steam apps; if so, buy the Windows SKU or plan to dual‑boot with caution.
  • Watch regional availability and warranty terms carefully; an imported SteamOS unit may have different support expectations than a locally purchased Windows device.

Conclusion​

A SteamOS‑preinstalled Legion Go 2 makes strategic sense for Lenovo and is technically plausible: the company already shipped a SteamOS Legion Go S, the Legion Go 2 hardware is documented and high‑end, and Valve’s push to broaden SteamOS beyond the Deck provides the necessary software scaffolding. Independent reporting and hands‑on spec sheets corroborate the core hardware claims—Ryzen Z2 Extreme, 8.8‑inch OLED 144 Hz, 32 GB LPDDR5X, and a 74 Wh battery—so the rumor is credible from a hardware perspective. The biggest question marks are not whether Lenovo can make a SteamOS image, but whether Valve and OEM engineering can deliver a seamless, well‑supported SteamOS experience on a power‑hungry, premium handheld without exposing buyers to anti‑cheat gaps, driver regressions, or patching confusion. CES 2026 should clarify Lenovo’s intent, pricing, and Valve’s certification commitments; until then, the story ought to be treated as a compelling and plausible leak rather than a done deal.
For handheld PC enthusiasts, the prospect is exciting: an option that marries top‑tier, Z2‑class performance with a console‑like UX could redefine expectations for battery life, sustained frame‑rate behavior, and out‑of‑the‑box simplicity. For competitive players and those who rely on Windows‑only tools or games, the Windows Legion Go 2 remains the safer bet. The real winner will be whoever delivers the right combination of compatibility signaling, update cadence, and honest, transparent SKU design — and CES 2026 will be where Lenovo gets its first, public grade.

Source: The Outerhaven Lenovo Legion Go 2 With SteamOS Reportedly Set for CES 2026 Reveal | The Outerhaven
 

Microsoft’s storage team has quietly delivered one of the most consequential I/O changes to Windows in years: a native NVMe storage path that removes the decades‑old SCSI translation layer and, when enabled, can produce measurable SSD performance gains — and a community of enthusiasts has already found ways to flip that switch on Windows 11, with both promising benchmarks and non‑trivial stability and compatibility risks.

Background / Overview​

For much of Windows’ history the operating system exposed block storage through a SCSI‑style abstraction. That model simplified compatibility across hard drives, SATA SSDs, SAN devices and NVMe SSDs by making every block device look like a common “disk” class to the kernel and userland. Translating modern NVMe semantics into that SCSI presentation introduced extra work: per‑I/O translation, locking and queue serialization that increasingly limit throughput and add latency on today’s highly parallel NVMe hardware. Microsoft’s new native NVMe path removes that translation and exposes NVMe semantics directly to the kernel — a modernization built primarily for high‑concurrency, IOPS‑heavy server workloads.

Microsoft shipped this capability as part of Windows Server 2025’s servicing wave and documented an opt‑in enablement route for Server administrators. The Server release includes published microbenchmark parameters and lab figures that show very large uplifts (up to roughly ~80% higher IOPS in specific DiskSpd 4K random read tests and ~45% fewer CPU cycles per I/O under those microbenchmark conditions). These numbers are lab results on enterprise testbeds and are reproducible only under the same conditions; they are not a guarantee of identical improvement for every consumer PC.

That Server‑side modernization is the safe, supported path — but because much of Windows’ kernel and driver infrastructure is shared between Server and Client SKUs, community researchers discovered the native NVMe components are already present in recent Windows 11 builds. By adding certain FeatureManagement override entries to the registry, testers can switch many client systems to the native NVMe class driver. This has produced an avalanche of test reports, benchmark runs, and cautionary warnings across forums and social media.

What “native NVMe” actually changes (technical primer)​

NVMe vs SCSI: why the difference matters​

NVMe (Non‑Volatile Memory Express) was designed from the ground up for PCIe‑attached flash media. Key architectural features include:
  • Massive parallelism — NVMe supports many thousands of queue pairs and deep per‑queue depth values.
  • Per‑core queueing — controllers and OS paths can achieve low contention by steering queues to CPU cores.
  • Low per‑command overhead — submissions and completions are designed to be light and fast compared with legacy block abstractions.
The traditional Windows approach funneled NVMe traffic through a SCSI‑oriented model (StorNVMe’s SCSI translation support), which was convenient for compatibility but created serialization, translation overhead, and additional kernel locking that reduce performance as queue depths and concurrency rise. A native NVMe path eliminates that forced mapping and aligns the kernel I/O plumbing to the protocol the hardware was designed for.

What the native path does under the hood​

  • Exposes NVMe multi‑queue semantics directly to the kernel, enabling better per‑core queue affinity.
  • Removes the per‑I/O SCSI translation layer and the associated context switches.
  • Reduces kernel locking and serialization points — lowering CPU cost per I/O and improving tail latency under high parallelism.
  • Allows the OS to better leverage vendor and device NVMe features, multi‑namespace devices, and high queue depths without unnecessary translation penalties.
These changes are most visible on workloads that produce many small, parallel I/Os (4K random reads/writes at high queue depth), which is why Microsoft’s lab microbenchmarks targeted those scenarios.

The official rollout: Windows Server 2025 (supported path)​

Microsoft published the native NVMe capability as an opt‑in feature for Windows Server 2025 and provided guidance for enterprises on how to validate and enable it after applying the servicing update in which the feature shipped. The Server toggle and the associated Group Policy / FeatureManagement override are documented by Microsoft as the supported path for production systems. Microsoft also released the DiskSpd command line and hardware list used for lab tests to make the microbenchmarks reproducible in controlled environments.

Why Microsoft placed this feature behind an opt‑in for Server and left it disabled by default:
  • Enterprise workloads can be heavily I/O‑sensitive and benefit the most, so admins need controlled rollout and validation.
  • Kernel I/O changes touch many subsystems: backup, imaging, vendor tools, management controllers and hypervisor interactions all need testing.
  • Server environments commonly use higher‑end NVMe hardware where the relative gains are larger; consumer gains are more variable.

The community discovery: flipping the switch on Windows 11 (unsupported)​

Enthusiasts discovered that recent Windows 11 servicing packages include the native NVMe components and that a set of numeric FeatureManagement override DWORDs applied under the same registry Overrides path can cause eligible client systems to load the native NVMe driver instead of the legacy SCSI presentation.
  • The official Server FeatureManagement override published by Microsoft uses a documented numeric ID and a supported process for Server.
  • Community‑reported client override IDs — circulated widely on Reddit, TechPowerUp and enthusiast forums — are undocumented internal IDs and therefore unsupported by Microsoft. Tests using these client IDs have been successful on many machines, but they are community‑driven and can be unstable.
Commonly circulated community keys (present in many forum posts) include three DWORD values placed under:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides
Those community IDs are widely shared online, but they are not Microsoft documentation and may change or be revoked in future builds. Treat them as experimental.
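For those who experiment despite the warnings, scripting the change at least makes the rollback path explicit. The sketch below is deliberately incomplete: the value names are placeholders, not the real community‑circulated IDs, which are omitted here precisely because they are undocumented and unsupported.

  # EXPERIMENTAL / UNSUPPORTED on Windows 11 - test systems only.
  $path = 'HKLM:\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides'
  $ids  = @('0000000000','0000000001','0000000002')   # PLACEHOLDERS, not real override IDs

  # Apply the overrides (reboot afterwards):
  New-Item -Path $path -Force | Out-Null
  foreach ($id in $ids) {
      New-ItemProperty -Path $path -Name $id -PropertyType DWord -Value 1 -Force | Out-Null
  }

  # Rollback: remove the same values (run from Safe Mode if the system misbehaves):
  foreach ($id in $ids) {
      Remove-ItemProperty -Path $path -Name $id -ErrorAction SilentlyContinue
  }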

Benchmarks and real‑world results: what people are seeing​

Community and independent media testing has produced a broad distribution of results:
  • Microsoft’s server lab microbenchmarks show up to ~80% higher IOPS (specific DiskSpd 4K random read profiles) and ~45% less CPU cost per I/O in their controlled enterprise testbeds. These are synthetic but reproducible under the same hardware and workload parameters.
  • Enthusiast desktop tests often show single‑digit to low‑double‑digit percentage gains in throughput for many consumer SSDs, but significant wins in random I/O (4K) workloads are common. Many users report 5–20% improvements in day‑to‑day random I/O metrics and benchmark scores; some drives with particular controllers or firmware show larger deltas.
  • A handful of community benchmark runs and independent outlets reported dramatic gains in specific cases: Tom’s Hardware, PCGamesN and other outlets highlighted tests where random write or 4K random I/O improved substantially on select drives, with a few isolated reports claiming far larger uplifts under specialized conditions. Those extreme outliers are heavily hardware‑dependent and not typical for the average consumer.
Why the spread? NVMe performance depends strongly on controller design, firmware behavior, PCIe generation (PCIe 3.0 vs 4.0 vs 5.0), vendor drivers, and whether the device was performing vendor‑specific queuing or bypassing the in‑box stack prior to the change. Systems already using vendor‑supplied NVMe drivers that bypass Microsoft’s stack may see little or no change.

Compatibility, stability, and known risks​

Switching the storage presentation layer at the kernel level is inherently risky. Community testing and vendor writeups have documented several concrete hazards:
  • Third‑party SSD tools and vendor management utilities may fail to find or properly manage drives after the switch. Tools like Samsung Magician, Western Digital Dashboard and others may break, be unable to update firmware, or misreport devices.
  • Backup, imaging and restore tools that rely on disk identifiers or SCSI‑style presentation can fail to locate volumes or restore correctly after a presentation change. This can break scheduled backups or imaging workflows.
  • Some users have reported worse performance or higher tail latency on particular models after switching the driver, and a small number of testers reported BSODs and Safe Mode boot issues when the unsupported registry toggles were applied. Treat community‑sourced stability reports as actionable warnings.
  • Vendor drivers may implement their own NVMe optimizations and therefore not benefit from Microsoft’s in‑box native path; in other words, if a drive is already using a vendor‑optimized driver that bypasses the Microsoft stack, you may see no effect.
Because altering the driver path touches fundamental storage behavior, the safest approach for production systems remains to wait for an official, fully supported client rollout or validated vendor driver updates for your SSD model.

Practical recommendations and safe testing checklist​

For power users, IT pros and storage enthusiasts who want to experiment, follow a strict validation routine. Do not run these experiments on a production machine without complete backups and a tested recovery plan.
  1. Backup Reality Check
    1. Make a full, verified disk image of your system drive (not just file backup). Ensure recovery media is tested.
    2. Create a System Restore point and export any critical configurations (a scripted pre‑flight sketch follows this checklist).
  2. Use a disposable test platform
    • Prefer a secondary machine, a dedicated test SSD, or a virtualized lab environment where the drive is non‑critical.
    • Avoid testing on your daily driver or a primary work laptop.
  3. Update firmware and drivers
    • Update SSD firmware to the latest vendor release.
    • Update motherboard/UEFI firmware and chipset/NVMe related drivers.
    • Confirm whether your NVMe is using the in‑box Microsoft driver or a vendor driver (Device Manager → driver details). If a vendor driver is already present, you may not see any change.
  4. Staged enabling (if you proceed)
    • Enable and validate on a test system only.
    • Monitor for unexpected device duplication in Device Manager, broken manufacturer tools, or altered drive IDs that affect backup/restore.
  5. Recovery plan
    • If you see corruption, BSoD, or missing drives in recovery tools, have a tested alternate boot media and image restore plan ready.
    • Know how to remove the override values to attempt rollback if necessary.
  6. Wait for vendor validation
    • Prefer vendor‑certified driver updates or official Microsoft client guidance before moving an experimental change into production.
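Two of the checklist items lend themselves to a quick pre‑flight script: the restore point from step 1, and a record of the disk identifiers that backup and imaging tools bind to, so any presentation change after a toggle is immediately visible. A sketch using in‑box cmdlets (file names are arbitrary; Checkpoint-Computer requires Windows PowerShell, not PowerShell 7):

  # Pre-flight snapshot before experimenting with the storage path.
  # A restore point is NOT a substitute for a full, verified disk image.
  Checkpoint-Computer -Description 'Pre-NVMe-toggle' -RestorePointType 'MODIFY_SETTINGS'

  # Record identifiers that backup/imaging tools commonly match on.
  Get-Disk |
      Select-Object Number, FriendlyName, SerialNumber, UniqueId, Guid, PartitionStyle |
      Export-Csv -Path 'disk-ids-before.csv' -NoTypeInformation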

The corporate angle: why Microsoft prioritized Server first​

Microsoft made a deliberate choice to ship native NVMe for Windows Server 2025 first and as opt‑in, and for good reasons:
  • Workload fit — Server workloads (virtualization, databases, Storage Spaces Direct, AI/ML scratch) are more likely to be IO‑bound and to benefit from lower CPU per‑I/O and improved tail latency.
  • Hardware scale — Enterprise NVMe and HBA hardware expose far higher headroom where the legacy stack was the limiting factor; lab tests on dual‑socket hosts with enterprise NVMe show the biggest deltas.
  • Validation requirements — Enterprise operators demand predictable, vendor‑validated behavior across management, backup and hypervisor ecosystems; Microsoft therefore provided an opt‑in route with documented test artifacts for administrators.
For consumer Windows 11, the usability and compatibility surface is far broader — many third‑party tools, OEM utilities, and vendor drivers interact with storage, which complicates a general client rollout until vendors and Microsoft complete joint validation.

Bottom line: who should care, and what to do next​

  • Enterprise admins and data center operators — This is a clear win to evaluate in lab: native NVMe reduces CPU cost per I/O and can raise IOPS headroom on modern enterprise NVMe and NVMe‑oF devices. Test using Microsoft’s published DiskSpd parameters and staged rollouts.
  • Enthusiasts and storage professionals — There’s real potential for better responsiveness and higher random‑IO performance on the right hardware. Experimental client toggles exist and many early testers report meaningful improvements in 4K random workloads — but proceed only with full backups and in a test environment.
  • Everyday users and gamers — For the majority, switching the driver today is unlikely to yield dramatic perceptible improvements in daily tasks or game load times. Modern NVMe drives already feel fast, and typical desktop workloads rarely saturate the legacy stack enough for large gains. Wait for vendor‑certified client updates or official Microsoft guidance.

Final assessment — progress, promise, and prudence​

Microsoft’s native NVMe work is a necessary and long‑overdue modernization that aligns Windows with the way high‑performance flash is architected. The engineering case is sound: removing unnecessary SCSI translation and exposing NVMe’s queue model reduces kernel overhead and unlocks measurable headroom on modern hardware. Microsoft’s lab figures are credible for the conditions cited, and independent testing confirms the direction — though the magnitude of gains varies widely across hardware, firmware and workload mixes.

That said, turning a kernel‑level storage feature on a consumer system using undocumented client flags is experimental and comes with real hazards: backup/restore breakage, vendor tool incompatibility, errant disk presentation, and even boot‑time failures in some community reports. Until Microsoft publishes an official consumer‑SKU enablement path or SSD vendors release validated client drivers that take advantage of the native stack, the prudent route for most users is to observe, test in isolated environments, and prepare to revert.

NVMe SSD performance in Windows just hit a new frontier — the OS is finally catching up to the hardware. The gains are real, and the engineering leap could reshape how Windows handles flash at every level from consumer M.2 drives to enterprise NVMe farms. But the transition needs careful validation and vendor cooperation before it’s safe to flip for every PC. For those who test it now: backup first, test in a sandbox, and expect surprises.

Source: Digital Trends Your Windows SSD Could Be Faster, Microsoft’s New Update Reveals Why
 

NVIDIA’s H100 GPU has moved from niche research hardware to the fulcrum of a global AI buildout — and that shift is rewriting data‑center economics, corporate procurement strategies, and the investment case for NVDA in ways investors and IT decision‑makers must now treat as first‑order risks and opportunities.

Background / Overview

NVIDIA’s H100 and its rack‑scale integrations (HGX/DGX) have become the reference architecture for training and serving large language models and other generative AI workloads. Hyperscalers, cloud providers, and enterprise AI programs increasingly standardize on H100‑class accelerators because they combine high‑density tensor compute, HBM memory bandwidth, and NVLink/NVSwitch fabric optimized for multi‑GPU synchronization. Those technical attributes translate directly into faster time‑to‑market, improved energy efficiency at scale, and a growing software‑driven lock‑in via CUDA and related toolchains.
The commercial story is tightly coupled to that technical story: H100‑class systems have commanded premium system prices and created large, visible backlogs at OEMs and cloud providers, which in turn has been a dominant driver of NVIDIA’s data‑center revenue expansion in recent reporting cycles and industry commentary. That dynamic — premium ASPs for integrated systems, recurring cloud consumption, and the prospect of expanding software monetization — is the heart of the bullish thesis investors keep citing.

What the H100 Actually Solves — The Technical and Business Anatomy​

The engineering problem: matrix math at scale​

Large neural networks are dominated by matrix multiplies and tensor operations. CPUs are ill‑suited to that workload; the H100’s tensor cores and high‑bandwidth memory (HBM) deliver orders‑of‑magnitude improvements in throughput and performance‑per‑watt for these operations. That’s why frontier model training — hundreds of billions to trillions of parameters — is impractical in many configurations without H100‑class accelerators.

System-level advantages​

The H100 is more than a single chip. NVLink and NVSwitch create tight coherence domains across multiple GPUs inside servers and racks, enabling large collective operations with lower communication overhead. Those rack‑scale primitives make certain model topologies and algorithms cost‑effective only on NVLink‑dense infrastructure, which amplifies real‑world portability friction and increases the effective switching cost for customers.

Business outcomes: speed, energy, and productivity​

  • Speed: Reduced wall‑clock training times shorten research iteration cycles, directly affecting competitiveness for any company building models.
  • Energy efficiency at hyperscale: More performance per watt allows hyperscalers to densify AI compute within existing power envelopes.
  • Developer productivity and lock‑in: CUDA, cuDNN, TensorRT and NVIDIA’s enterprise software reduce the engineering cost to go live, creating a practical moat for NVIDIA’s platform.

Market Dynamics: Why H100‑Class GPUs Command the Narrative​

Hyperscaler capex and cloud availability​

US hyperscalers and large cloud providers have driven a capital spending supercycle focused on AI infrastructure. H100 instances (and successor classes) are broadly available in hourly instance form across major clouds, converting enterprise demand into recurring billed consumption and consistent hardware draw for NVIDIA. Those cloud commitments underpin much of the durable demand thesis.

Pricing and realized ASPs​

Industry commentary and procurement analyses indicate H100‑class hardware sells at premium system prices once you include chassis, networking, and integration. Directional market bands reported across the industry put per‑card prices in the five‑figure range when embedded in systems ($25,000–$40,000 per unit is a commonly cited estimate from early availability windows), though these are approximate and should be treated as estimates pending OEM quotes. That premium explains why early availability windows saw constrained supply and elevated realized prices per GPU‑hour.

Software and services as margin multipliers​

As hardware ASPs mature, NVIDIA’s path to sustaining high margins is layered through software, orchestration, and marketplaces (NVIDIA AI Enterprise, model runtimes, licensing). The more the company can convert hardware customers into recurring software licensees or cloud marketplace participants, the more it can smooth revenue through hardware cycles. Market discussion highlights this as a central pillar of the long‑term bull case.

The Investment Picture: Where the Opportunity and Risk Concentrate​

The bullish case (why NVDA can still compound)​

  • AI adoption is early: Many enterprises remain in pilot or early deployment. If AI becomes as ubiquitous as mobile/cloud, addressable compute demand could rise materially beyond current expectations.
  • Ecosystem lock‑in: The installed base of CUDA and optimized model toolchains creates non‑trivial switching costs. Even when alternatives exist, migration requires engineering time and risk — a moat in practical terms.
  • Optionality beyond GPUs: NVIDIA’s opportunities in automotive, robotics, edge AI, and software marketplaces provide additional revenue streams that can help fund R&D and smooth cyclicality.

The cautious/bear case (how the premium can unwind)​

  • Valuation sensitivity: Shares are priced with high expectations for sustained AI capex. If orders normalize or macro tightens, multiples can reprice quickly. Simulated market snapshots in industry commentaries explicitly warn that much of the future is already priced in, so downside can be fast.
  • Credible competition: Hyperscaler custom ASICs (Google TPUs, AWS Trainium) and rival accelerators (AMD MI300 series and others) are closing performance and software gaps. For many cloud‑native workloads, hyperscalers can capture cost advantages by routing workloads to their native silicon. Independent reporting shows these alternatives are material in some workloads, even if not uniformly substitutive at the frontier.
  • Capex cycle risk: GPU procurement is cyclical. If enterprises stall on fleet upgrades or hyperscalers rebalance, NVIDIA faces inventory swings and ASP compression. Historical tech cycles remind us these swings can be sharp.
  • Regulatory and geopolitical overhangs: Export controls and region‑specific constraints can limit addressable markets for advanced accelerators, forcing product variants and complicating global contracts. This is a live policy risk cited across industry analyses.

Verifying Key Technical and Commercial Claims (and What Is or Isn’t Verifiable)​

Industry reporting and community analyses corroborate the broad technical claims about H100 leadership — tensor cores, HBM bandwidth, and NVLink fabric are repeatedly cited as the differentiators enabling frontier model training and dense rack deployments. Multiple reports align on the same functional points: H100‑class platforms materially reduce training time and improve efficiency, and NVLink‑dense racks can create tangible portability friction for models. These are high‑confidence statements backed by multiple independent write‑ups.
Price bands for H100‑class integrated systems (the $25,000–$40,000 per‑unit directional range) appear across vendor commentary and forum analyses, but they are estimates reflecting early scarcity premiums and system integration levels; they should be treated as directional, not contractual, until you have OEM quotes. Simulated stock price snapshots, analyst price targets, and one‑year return examples in the narrative are explicitly hypothetical in the original materials and must not be used as live investment inputs — they are scenario illustrations.
Where the public material is thin: precise long‑run unit demand curves, margin decompositions at the SKU level, and future cadence/performance metrics for successors (e.g., a hypothetical “B100” or Blackwell‑class successor) are often teased by vendors and guests but require primary disclosures or measured benchmarks to verify. Treat any roadmap performance claims as contingent until validated by product briefings, independent benchmark results, or disclosures in earnings filings.

Competition and the Credible Threat Matrix​

Hyperscaler silicon (Trainium, TPUs, and in‑house ASICs)​

Hyperscalers have a strategic incentive to reduce third‑party GPU spend by routing workloads to their own silicon when cost‑performance favors it. AWS Trainium and Google TPU are both designed to offer competitive price‑performance in the hyperscaler native environments; independent reports show they can be cost‑competitive for many workloads, though porting friction and certain latency‑sensitive cases still favor NVIDIA in many deployments. This makes hyperscaler silicon a meaningful, if not universal, threat to the high‑end GPU monopoly.

AMD and other third‑party accelerators​

AMD’s MI300 and other accelerator entrants are narrowing hardware performance and software gaps. Success here depends on driver stability, tooling parity, and developer adoption; a sustained multi‑vendor competitive landscape could force price compression on commodity workloads even while NVIDIA retains leadership in the ultra‑frontier segment. Forum analyses emphasize that replacing NVIDIA at the frontier requires not just raw hardware parity but comparable ecosystem buy‑in.

The portability paradox​

A recurring theme is that models and training pipelines optimized for NVLink‑dense racks are not trivially portable. That portability friction amplifies NVIDIA’s moat even if raw FLOPS parity emerges elsewhere, which could slow the rate at which hyperscalers and enterprises switch vendors. However, this is a strategic advantage that can decay over time as tooling improves and as clouds build ecosystems for alternative accelerators.

Practical Guidance — How Investors and IT Buyers Should Think About H100 Today​

For investors: a disciplined, scenario‑based framework​

  • Define scenarios explicitly: bull (multi‑year elevated capex), base (elevated but normalizing), and bear (capex trough + competition + regulatory shock). Assign probability and model multiples under each scenario (a worked illustration follows this list).
  • Size positions to risk: use smaller initial allocations with systematic add rules triggered by objective pullbacks or by fresh demand evidence (order backlogs, cloud consumption data). Consider using options to hedge around earnings and policy headlines.
  • Watch leading indicators: hyperscaler capex commentary, cloud instance pricing/availability, and NVIDIA software monetization cadence (marketplaces, license revenues) are more informative than short‑term share moves.
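As a concrete illustration of the probability‑weighting step — with loudly hypothetical probabilities and per‑share values, not forecasts or targets — the scenario framework reduces to a weighted sum:

  E[V] = p_bull * V_bull + p_base * V_base + p_bear * V_bear
       = 0.25 * $220 + 0.50 * $150 + 0.25 * $90
       = $152.50 per share (hypothetical inputs, illustrative only)

Comparing that expected value against the current price, and re-running it as probabilities shift with new capex and competition data, is the discipline the framework asks for.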

For procurement and IT decision‑makers​

  • Require workload‑level benchmarks. Peak TFLOPS or TOPS are directional; insist on throughput and latency results using your models, quantization schemes, and runtimes.
  • Design for portability where feasible. Use ONNX, containerized runtimes, and abstraction layers to reduce lock‑in risk, accepting that this may reduce peak efficiency but increase strategic optionality.
  • Pilot, measure ROI, and map GPU‑hour consumption to measurable business outcomes. Require conversion thresholds from pilot to capex before committing to large multi‑year purchases.

Regulatory, Geopolitical and Operational Overhangs​

Export controls and region‑specific restrictions on advanced accelerators remain a tangible risk. Historical precedents show NVIDIA has adjusted product variants or withheld features to comply with policy, and any future tightening could shrink addressable markets for top‑end training gear. Investors should explicitly model export‑control scenarios and regional demand shifts.
Operationally, data‑center power constraints and logistics (lead times, rack integration complexity) can limit how quickly customers can absorb additional GPU inventory, introducing operational ceilings to near‑term revenue growth. That’s why performance‑per‑watt roadmaps matter as much as raw throughput.

What to Monitor Next — A Short Watchlist for Investors and IT Leaders​

  • Hyperscaler capex commentary and cloud instance SKU changes (H100/Blackwell instance launches or de‑lists).
  • NVIDIA’s software monetization cadence: growth in NVIDIA AI Enterprise, marketplace take rates, and recurring licensing revenue.
  • Competitor benchmarks and third‑party independent testing for Trainium/TPU/MI300 on production workloads.
  • Policy actions around export controls, as incremental changes can alter international TAM materially.

Conclusion — Follow the GPUs, But Model the Possibilities​

The H100 era is not merely a hardware fad; it is the nucleus of a platform that combines high‑performance silicon, rack‑scale systems, and a rich software ecosystem that together reframe how organizations build and deploy generative AI. That platform effect — speed, energy efficiency, and developer lock‑in — is real and has substantial commercial consequences. Multiple independent industry analyses converge on this fundamental point, underlining why NVIDIA’s H100‑class GPUs have become central to modern AI infrastructure narratives.
For investors, the thesis is straightforward but conditional: if durable, broad‑based AI capex continues and NVIDIA successfully expands software and services monetization, the company’s premium valuation could look justified. Conversely, if hyperscalers materially shift workloads to competitive silicon, macro capex tightens, or regulation constrains shipments, the premium can reprice quickly. The right approach is scenario planning, active monitoring of front‑line indicators, and position sizing that reflects both the upside and the substantial tail risks.
For IT leaders, the pragmatic path is to benchmark actual workloads, insist on measurable ROI before large capex, and design for portability where strategic. The H100 will win many battles for the foreseeable future, but durable success for organizations will depend on aligning procurement with measurable outcomes and on maintaining agility as an increasingly heterogeneous accelerator market develops.
NVIDIA’s H100 and its successors have reshaped the data‑center conversation: they are the engines that accelerate AI progress, and they are now essential inputs into both enterprise architecture choices and investment theses. Understanding how those GPUs translate into business value — and what could undermine that value — is table stakes for anyone serious about technology investing or AI infrastructure procurement today.

Source: AD HOC NEWS NVIDIA’s AI Chips Are Eating the World: What the H100 (and Its Successors) Mean for Investors Now
 
