Microsoft’s contribution of the RAMDAX driver to the upstream Linux tree marks a significant step in treating carved-out RAM as first-class persistent-memory devices, exposing in-memory regions as NVDIMM DIMMs and enabling flexible DAX namespaces for virtualized and cloud workloads.
Source: Phoronix — Microsoft's RAMDAX Driver Merged For Linux 6.19 To Carve Out RAM As NVDIMM Devices
Background / Overview
Microsoft engineer Mike Rapoport has posted a new driver, RAMDAX, to the libnvdimm development branch with the express goal of turning RAM carveouts (memmap-defined ranges and dummy pmem-region DT nodes) into libnvdimm DIMM devices that can present DAX namespaces to the system. The patch and its accompanying Kconfig entries explicitly describe the driver as one that manages memmap-created or device-tree pmem-region memory ranges as DIMMs, implements a small on-region metadata area for namespace management, and exposes the memory via standard persistent-memory interfaces. The driver has been queued in the nvdimm.git “libnvdimm-for-next” branch and is expected to land during the merge window for Linux 6.19, placing RAMDAX among the other storage, memory, and driver updates slated for that release.
What RAMDAX Does — Technical Summary
Exposing RAM carveouts as NVDIMM DIMMs
- RAMDAX converts memory regions allocated by kernel command-line memmap= options (on x86) or dummy pmem-region nodes (on device-tree platforms) into libnvdimm DIMM devices. These appear to the kernel and userspace as NVDIMM devices that can host DAX namespaces (fsdax/devdax).
- The driver integrates with the libnvdimm subsystem and is selectable through a new Kconfig option, CONFIG_RAMDAX, which defaults to being selected when libnvdimm support is enabled. Its entry explains the intended use-cases and binding strategy.
Namespace metadata and capacity
- RAMDAX reserves a tiny metadata area at the end of the carved memory region — the implementation notes state the driver “steals 128K in the end of the memmap range for the namespace management.” This in-region label area is used to manage a namespace label space and allows the driver to support up to 509 namespaces on a single carved region via the ndctl-style namespace management model.
- Namespaces created on RAMDAX devices are consumable by existing userspace tooling that understands libnvdimm/DAX semantics (for example, the ndctl/ndctl create-namespace toolset) and thus integrate into the same management workflows used for persistent memory (PMEM) devices.
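Assuming a RAMDAX-backed region shows up under libnvdimm as `region0` (the region name here is an assumption for illustration), the standard ndctl workflow the article describes would look roughly like this:

```shell
# List regions and DIMMs registered with libnvdimm (human-readable units)
ndctl list -R -u

# Create a devdax namespace on the carved region for direct device access
ndctl create-namespace --region=region0 --mode=devdax

# Or an fsdax namespace, usable as a block device for a DAX-capable filesystem
ndctl create-namespace --region=region0 --mode=fsdax
```

These are the same ndctl invocations used for conventional PMEM devices, which is the point: no RAMDAX-specific tooling should be required.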
How the memory is defined and bound
- The driver expects to be force-bound to the platform’s e820_pmem or pmem-region platform devices — typically through the driver_override sysfs attribute — allowing administrators to assert that a particular memmap region (or pmem-region node) is to be managed by RAMDAX. This avoids automatic binding ambiguity and gives operators explicit control.
- Supported creation methods are:
- Kernel command line memmap= options (x86): operators can carve off a physical RAM range at boot and mark it as PRAM/pmem for the kernel.
- Dummy pmem-region device tree nodes (DT platforms): device-tree authors can add pmem-region stubs that are then claimed by the RAMDAX driver at runtime.
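Putting the two pieces together on x86, a carveout-plus-binding sequence might look like the following sketch. The carveout address and size, the platform-device name, and the sysfs driver directory are all assumptions; check `/sys/bus/platform/devices` on the target host:

```shell
# 1. Boot-time carveout: reserve 4 GiB of RAM starting at the 16 GiB mark
#    as legacy pmem via the kernel command line (values are illustrative):
#        memmap=4G!16G

# 2. After boot, force-bind the resulting e820_pmem platform device to
#    RAMDAX via driver_override, then rebind it explicitly
echo ramdax > /sys/bus/platform/devices/e820_pmem/driver_override
echo e820_pmem > /sys/bus/platform/drivers/ramdax/bind
```

Because driver_override must be written deliberately, nothing is claimed automatically — which is exactly the explicit-control property the design aims for.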
Why this Matters: Practical Use Cases and Benefits
Flexible, dynamic “persistent” memory for VM hosts
Virtual machine hosts and hypervisor environments often want to present fast, byte-addressable memory to guests for cache layers, in-memory DBs, or performance-critical temporary datasets. Historically, carving memory for DAX required static firmware changes or boot-time memmap settings that were inflexible and required host reboots to change. RAMDAX offers a more dynamic, software-driven approach: carve RAM into regions, bind RAMDAX, and use libnvdimm/ndctl to create and manage namespaces without firmware updates or invasive platform configuration changes. That promises better operational agility on compute hosts and cloud hypervisors.
Uses that benefit immediately
- High-performance caching layers for VMs: present devdax/fsdax to guests for direct-memory I/O without storage-device latency.
- Test/dev and CI workloads: emulate PMEM-backed storage stacks cheaply and at higher speed using RAM rather than NVMe-attached PMEM or persistent NVDIMMs.
- Cloud provider internal tooling: cloud operators can dynamically partition host RAM into disposable, high-throughput namespaces for transient workloads while keeping isolation semantics.
Integrates with existing tools and workflows
Because RAMDAX implements standard libnvdimm DIMMs and DAX namespaces, existing userspace tooling such as ndctl, along with subsystem drivers (DAX, PMEM, BTT/PFN), can be reused. This reduces integration cost and leverages a mature ecosystem for namespace creation, labeling, and access modes.
Strengths: What RAMDAX Brings to the Table
- Operational flexibility: Unlike static memmap or DT-only approaches, RAMDAX supports dynamic layout and namespace changes without requiring host firmware updates or full reboots to change the layout semantics. This is particularly useful in multi-tenant or cloud-hosted VM farms where agility is important.
- Standards alignment: RAMDAX exposes memory via the libnvdimm/NVDIMM model and supports DAX modes; this allows immediate consumption by existing persistent-memory stacks and utilities, avoiding bespoke APIs or new tooling.
- Small, targeted kernel integration: The submitted patch is a single-driver addition plus Kconfig and Makefile entries and does not attempt to rework libnvdimm broadly. That surgical approach reduces regression risk and simplifies backporting or vendor integration. The fact that it is queued in libnvdimm-for-next indicates it followed the established upstream workflow for memory-device contributions.
- Low barrier for experimentation: Because the underlying mechanism can operate with memmap carveouts or dummy pmem-region nodes, users and integrators can experiment in labs or staging without specialized hardware. This makes it accessible for developers, QA, and cloud operator testbeds.
Risks, Caveats, and Operational Warnings
Volatility vs. “persistence” semantics — important to clarify
The RAMDAX driver treats carved-out RAM as NVDIMM-style DIMM devices to the kernel and userspace; however, the underlying media is still volatile host RAM unless combined with other platform features. The driver’s “persistent memory interfaces” label refers primarily to the interface semantics (byte-addressable DAX namespaces, libnvdimm controls), not that the data magically survives a host power cycle or reboot in the general case.
Operators must not confuse the appearance of persistence (DAX/devdax) with the actual physical durability of the underlying media: carved RAM will typically be lost on host reboot or power loss unless the platform explicitly wires that memory to a battery-backed or otherwise non-volatile medium. Any deployment planning to rely on data durability must validate the actual persistence guarantees of the platform and not assume RAMDAX offers non-volatile retention by itself. This distinction is important and must be documented in operator runbooks.
Data safety and isolation concerns in multi-tenant hosts
Exposing parts of host RAM as DAX to guests or processes may raise complex security and isolation trade-offs:
- Misconfiguration risk: incorrectly binding a region or failing to isolate namespaces could expose sensitive data to unintended VMs.
- Tenant safety: a guest that corrupts or misuses its DAX namespace could disrupt host-level memory layout assumptions if not properly constrained.
- Snapshot and migration semantics: VM snapshot or live-migration flows will need explicit handling for DAX-backed memory namespaces; without coordinated tooling, migrations may break consistency or leak data.
Persistence illusions and lifecycle management
Because RAMDAX reserves a small on-region metadata area (128 KiB) to manage label space and namespace metadata, operators need to be mindful that:
- The driver reduces the usable region by the metadata reservation.
- The maximum namespace count (reported as up to 509) is finite and depends on the label-space design; operators should plan namespace allocation accordingly.
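The capacity arithmetic is simple but worth writing down for capacity planning; the 4 GiB region size below is an assumption for illustration, while the 128 KiB reservation comes from the patch notes:

```shell
# Usable bytes in a carved region after RAMDAX's label-area reservation
REGION_BYTES=$((4 * 1024 * 1024 * 1024))   # 4 GiB carveout (illustrative)
LABEL_BYTES=$((128 * 1024))                # 128 KiB reserved at region end
USABLE=$((REGION_BYTES - LABEL_BYTES))
echo "$USABLE"                             # 4294836224 bytes usable
```

The reservation is negligible for realistic region sizes, but tooling that expects the full carved size to be available will be off by exactly this amount.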
Compatibility and distribution support
- RAMDAX appears in the libnvdimm-for-next branch and will need to be picked up by distribution kernels and vendor kernels for operators to receive it without custom kernels. Vendor packaging lag—especially in embedded or appliance images—remains the normal long-tail risk for kernel features.
- The Kconfig entry depends on platform knobs (X86_PMEM_LEGACY or OF) and on LIBNVDIMM; distributions that compile kernel variants without these options will not enable RAMDAX. System integrators must verify kernel configs in their environments.
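A quick way to check whether a given distribution kernel ships these options (the config file path varies by distribution; some expose it at /proc/config.gz instead):

```shell
# Look for RAMDAX and its prerequisites in the running kernel's config
grep -E 'CONFIG_(RAMDAX|LIBNVDIMM|X86_PMEM_LEGACY|OF)=' "/boot/config-$(uname -r)"
```

If CONFIG_RAMDAX is absent or set to `n`, the feature is unavailable without a rebuilt kernel.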
Testing and validation needs
- The approach must be validated against:
- VM lifecycle (boot, restart, suspend/resume)
- Crash and reboot recovery semantics
- Namespace management with ndctl and other userland tools
- Security isolation between namespaces and between host/guest
Implementation Notes and Developer Considerations
Driver binding and device model
The driver’s recommended binding approach — using driver_override to attach RAMDAX to e820_pmem or pmem-region devices — gives fine-grained control and avoids accidental claims of unrelated memory regions. That design choice aligns with conservative upstream driver models that prefer explicit binding for sensitive platform resources.
Metadata layout and label space
The decision to keep namespace metadata in-region (the 128 KiB at the region end) is pragmatic: it simplifies driver deployment because no separate metadata device or storage is required. However, it also means:
- The metadata is tied to the region’s lifetime.
- Corruption of that area (for example, by a buggy user process that circumvents DAX protections) could render the label space unusable.
Integration with ndctl/libnvdimm tooling
Because the device model maps to libnvdimm, existing userspace utilities should be able to manage RAMDAX-exposed namespaces with minimal changes. That is a major win for adoption and testing: operations teams can reuse ndctl flows for creation, labeling, and mode selection (fsdax vs devdax).
Recommended Operational Playbook (Concise)
- Inventory:
- Identify hosts where memmap carveouts or pmem-region DT nodes are used or would be useful.
- Identify kernel configurations (ensure LIBNVDIMM and appropriate PMEM options are enabled).
- Lab validation:
- Deploy RAMDAX-enabled kernels on isolated hosts.
- Exercise ndctl create/namespace workflows, test fsdax/devdax use cases with representative workloads.
- Test reboot and crash-recovery semantics to verify assumptions about data retention.
- Security and isolation:
- Harden host controls and restrict driver_override operations to privileged automation.
- Ensure orchestration tooling prevents accidental exposure of host-critical memory to untrusted guests.
- Staged rollout:
- Pilot on non-critical hypervisors; gather telemetry for performance and correctness.
- Monitor dmesg, kernel logs, ndctl outputs, and VM behavior.
- Documentation and runbooks:
- Document exactly what “persistence” means in your deployment (most likely volatile across host reboot unless additional platform persistence exists).
- Train operations staff on namespace lifecycle, ndctl usage, and emergency recovery.
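The reboot-retention test in the lab-validation step can be sketched as follows. The device path assumes an fsdax-mode namespace exposing a pmem block device (devdax character devices cannot be read with dd); all paths are assumptions, and on plain carved RAM the marker is expected to be lost:

```shell
# Probe whether data in a RAMDAX-backed fsdax namespace survives a reboot
DEV=/dev/pmem0                              # assumed fsdax block device
MARKER="ramdax-probe-$(date +%s)"

# Before reboot: write a marker to the first block, keep a durable copy
printf '%s' "$MARKER" | dd of="$DEV" bs=4096 count=1 conv=sync
printf '%s' "$MARKER" > /var/tmp/ramdax-marker

# After reboot: check whether the marker is still present
if dd if="$DEV" bs=4096 count=1 2>/dev/null \
     | grep -q "$(cat /var/tmp/ramdax-marker)"; then
  echo "marker survived reboot"
else
  echo "marker lost (volatile RAM, as expected)"
fi
```

A "marker lost" result on a given platform should be recorded in the runbook as the definitive persistence semantics for that host class.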
Broader Implications — Cloud, Hypervisors, and the PMEM Ecosystem
RAMDAX signals an ongoing trend in the Linux ecosystem: treating flexible, software-defined memory regions as first-class resources. For cloud providers and hypervisor vendors, this is attractive because it allows:
- Faster, cheaper experimentation with DAX-like semantics without requiring specialized hardware.
- New efficiency and performance options for transient workloads and caching layers.
- Consolidation of memory- and storage-tier tooling around libnvdimm’s APIs.
Conclusion
RAMDAX is a focused, practical kernel contribution that converts carved RAM ranges into libnvdimm DIMMs with DAX namespaces — a capability that will matter most to virtualized hosts, cloud operators, and teams that want flexible, high-throughput in-RAM namespaces without firmware-level changes. The patch is small, well-scoped, and leverages existing libnvdimm and ndctl tooling, which fast-tracks experimentation and adoption. At the same time, RAMDAX introduces operational and conceptual risks that demand careful attention: the interface’s “persistent memory” semantics do not automatically imply physical persistence across host failure, namespace lifecycle management and metadata protection require disciplined automation, and multi-tenant isolation must be proven at the orchestration layer. Early adopters should validate crash/reboot behavior, enforce strict binding controls, and integrate comprehensive monitoring before rolling RAMDAX into production hypervisor fleets. The upstream arrival of RAMDAX for Linux 6.19 is an important milestone in the evolution of software-definable memory, and it will be revealing to watch how distributions, cloud vendors, and the libnvdimm tooling ecosystem adopt and operationalize this capability in the months ahead.