Microsoft has quietly closed a long‑standing operational gap for datacenter teams: Windows Server failover clusters can now host Storage Spaces Direct (S2D)–backed CSVs and traditional SAN‑LUN CSVs side‑by‑side, giving organizations a supported path to modernize without abandoning existing SAN investments.
Background
For decades Windows Server Failover Clustering (WSFC) has been the backbone of Microsoft‑centric high‑availability storage and virtualization. The platform historically allowed clusters to present externally attached block storage (SAN via Fibre Channel or iSCSI) as Cluster Shared Volumes (CSVs), and since Windows Server 2016 it has also supported Storage Spaces Direct — Microsoft’s hyperconverged, software‑defined storage architecture that pools local disks and exposes volumes (usually formatted with ReFS) as CSVs for Hyper‑V and other clustered roles. Until recently, enterprises with large SAN estates faced a tradeoff: either rip‑and‑replace SANs to move to a full S2D hyperconverged architecture or maintain two distinct clusters and storage silos. Microsoft’s updated failover clustering guidance formalizes a third option — a mixed topology in which S2D (DAS) and SAN (FC/iSCSI) CSVs coexist within the same WSFC. This change was driven by customer demand to reuse SAN assets during phased S2D migrations and to allow flexible workload placement and migration strategies.
Overview: what Microsoft now supports
What “coexistence” actually means
- You can run ReFS‑formatted S2D volumes and NTFS‑formatted SAN LUNs as CSVs on the same failover cluster.
- The two storage domains operate side‑by‑side but remain logically separate; SAN LUNs are never added to an S2D pool and S2D cannot consume SAN‑presented disks.
Supported Windows Server releases
The guidance applies to modern server releases — specifically Windows Server 2022 and Windows Server 2025 (and Azure Local builds aligned to those releases). Microsoft’s documentation and Summit sessions make clear that the mixed topology has been supported in documentation since the Windows Server 2022 family and is carried forward into Windows Server 2025.
Why this matters — business and technical benefits
Combining S2D and SAN CSVs in a single WSFC delivers practical benefits for real‑world datacenters:
- Preserve past investments. Organizations can keep high‑capacity SANs for cold tiers, backups, and legacy applications while gradually adopting S2D for performance‑sensitive workloads.
- Flexible migration paths. Use Hyper‑V Storage Live Migration (Move Virtual Machine Storage) to shift VMs and VHDXs between SAN CSVs and S2D CSVs with minimal or no downtime. This enables phased migrations and risk‑mitigated transitions.
- Hybrid tiering within a cluster. Place latency‑sensitive VMs or AI/ML workloads on S2D (ReFS) volumes, while leaving large archival or vendor‑managed workloads on SAN (NTFS) CSVs.
- Disaster recovery and ransomware strategy. SAN arrays with integrated snapshot and replication capabilities can become backup targets for VMs running on S2D, improving recovery options and giving defenders additional immutable/snapshotted tiers.
- Operational continuity. Admins can retain existing SAN management workflows, vendor tooling and SLAs while introducing S2D benefits such as local NVMe caching, ReFS optimizations, or nested resiliency.
Technical realities and rules you must follow
The coexistence model is intentionally prescriptive to avoid catastrophic misconfigurations. Key technical constraints are:
- SAN disks may not be added to S2D storage pools. SAN LUNs are managed separately as CSVs; they must remain outside S2D pools. Breaking this separation risks data corruption and unsupported states.
- S2D remains DAS‑only. S2D is a hyperconverged pooling technology and must use locally attached disks in cluster nodes (including JBOD/NVMe/SSD/HDD) — it’s not designed to pool SAN‑presented devices.
- File system rules:
- Format S2D volumes with ReFS before converting them to CSVs.
- Format SAN volumes with NTFS before converting them to CSVs.
These formats correspond to workload expectations and the S2D/CSV code paths Microsoft optimizes.
- Supported connectivity for SAN CSVs includes Fibre Channel, iSCSI, and Microsoft‑supported iSCSI target implementations. Multipath and fabric resilience remain the administrator’s responsibility.
- Separate fault domains. S2D relies on node‑level replication and rebuilds; SAN availability depends on array controllers, fabrics, and multipath. Design your HA/DR to respect these domains.
- Node and cluster limits. Microsoft documents S2D cluster sizing and limits (S2D: 1–16 nodes per S2D storage cluster; SAN/disaggregated compute clusters follow separate scaling guidance). Always check the current Windows Server documentation for exact limits tied to the functional level you plan to run.
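The DAS‑only constraint lends itself to an automated preflight check. The sketch below is illustrative Python, not Microsoft tooling: the disk records mimic fields such as BusType from Get-PhysicalDisk output, and the bus‑type lists are assumptions to adapt to your hardware.

```python
# Preflight guard sketch: only locally attached disks may join an S2D pool;
# anything SAN-presented (Fibre Channel / iSCSI) is rejected outright.
LOCAL_BUS_TYPES = {"NVMe", "SATA", "SAS"}      # DAS interconnects (assumed list)
SAN_BUS_TYPES = {"Fibre Channel", "iSCSI"}     # never poolable in S2D

def s2d_eligible(disk: dict) -> bool:
    """True only if this disk may be added to an S2D storage pool."""
    bus = disk["BusType"]
    if bus in SAN_BUS_TYPES:
        return False                           # hard stop: SAN-presented LUN
    return bus in LOCAL_BUS_TYPES

inventory = [
    {"FriendlyName": "NVMe-Node1-Slot0", "BusType": "NVMe"},
    {"FriendlyName": "ArrayLUN-07", "BusType": "iSCSI"},
]
rejected = [d["FriendlyName"] for d in inventory if not s2d_eligible(d)]
print(rejected)  # the iSCSI LUN is blocked from pooling
```

Wiring a check like this into provisioning pipelines makes the "no SAN disks in S2D pools" rule a hard gate rather than a runbook footnote.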
Migration patterns: practical options and recommended steps
The most common operational scenario is migrating virtual machines between SAN CSVs and S2D CSVs inside the same cluster. Microsoft recommends using Hyper‑V’s Storage Live Migration (Move Virtual Machine Storage / Move‑VMStorage) to perform these moves non‑disruptively. The storage live migration process copies VHDXs and metadata files while the VM keeps running, replicates ongoing changes to the destination, and then performs the final cutover.
Practical migration checklist (high‑level):
- Validate cluster health and backups. Ensure cluster validation tests pass and you have verified backups or snapshots for all VMs to be moved.
- Confirm destination CSV formatting. Format S2D volumes as ReFS and SAN CSVs as NTFS before migrating VMs to them.
- Verify storage paths and permissions. Confirm the cluster nodes have access to the destination CSV and that ACLs and SMB/NTFS permissions are correct.
- Use Move Virtual Machine Storage from Failover Cluster Manager or PowerShell:
- GUI: Failover Cluster Manager → Virtual Machines → right‑click VM → Move → Virtual Machine Storage.
- PowerShell example: Move-VMStorage -VMName "MyVM" -DestinationStoragePath "C:\ClusterStorage\CSV2\MyVM\".
- Monitor I/O and latency. Storage migration creates extra network and storage load; schedule during off‑peak times or throttle simultaneous migrations.
- Validate VM functionality and performance post‑move. Check event logs, storage counters, and application behavior.
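The checklist’s advice to throttle simultaneous migrations can be sketched as a capped worker pool. This is an illustrative Python sketch: move_vm is a stand‑in for invoking Move‑VMStorage (for example over PowerShell remoting), and the VM names, paths, and concurrency limit are assumptions.

```python
# Throttled batch sketch: cap concurrent storage migrations so the fabric
# and destination CSV are not saturated during mass moves.
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 2  # tune from pilot measurements of IOPS/latency headroom

def move_vm(vm_name: str, dest_csv: str) -> str:
    # Placeholder for: Move-VMStorage -VMName $vm_name -DestinationStoragePath $dest_csv
    return f"{vm_name} moved to {dest_csv}"

batch = [
    ("VM01", r"C:\ClusterStorage\S2D-CSV1"),
    ("VM02", r"C:\ClusterStorage\S2D-CSV1"),
    ("VM03", r"C:\ClusterStorage\S2D-CSV2"),
]

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    results = list(pool.map(lambda job: move_vm(*job), batch))
print(results)
```

The same pattern applies whether the moves are scripted directly or driven through an orchestration tool: the cap, not the queue length, is what protects the fabric.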
Strengths: why enterprise IT should welcome this change
- Choice and non‑disruptive modernization. Organizations no longer have to choose between keeping SANs or adopting S2D; they can do both and evolve workloads on their schedule.
- Optimized placement. S2D’s local NVMe and ReFS strengths suit latency‑sensitive and high‑IOPS workloads; SANs still excel for large persistent stores, long retention backups, or workloads tied to array‑level features.
- Improved migration economics. The mixed cluster lowers the cost of migration by allowing reuse of SAN capacity as temporary or long‑term targets during transitions.
- Operational continuity for vendor ecosystems. Storage teams can keep vendor toolsets, health monitoring, and service contracts for SANs while server teams use modern S2D tooling.
Risks and caveats — what can go wrong
No architectural shift is risk‑free. Here are the main hazards to plan for:
- Misconfiguration risk. Accidentally adding SAN disks to the S2D pool (or vice versa) is an unsupported and dangerous state. Strict procedural controls and automation guards are essential.
- Performance interference. Heavy SAN I/O could congest network fabrics or cause contention that affects S2D replication traffic (east‑west). Design separate physical paths and QoS for S2D replication vs SAN access.
- Operational complexity. Two storage models in one cluster increases operational surface area. Teams must manage firmware, drivers, pathing, and update cadences for both storage stacks.
- Feature footprint differences. Some features behave differently on ReFS vs NTFS (cloning, dedupe, certain NTFS attributes). Validate application compatibility for the chosen filesystem.
- Upgrade and functional level considerations. Cluster and S2D storage pool versions are tied to cluster functional levels; rolling upgrades require careful sequencing. Community reports indicate some edge cases during in‑place functional upgrades and storage pool version transitions — test your exact upgrade path.
- Support boundaries. The coexistence guidance is prescriptive; straying outside Microsoft’s documented patterns risks stepping into unsupported configurations. Follow the Windows Server documentation and vendor guidance.
Operational best practices and hardening
To reduce risk and get the most from a mixed SAN+S2D cluster, adopt these best practices:
- Strict separation in provisioning scripts. Automate checks so SAN LUNs are never fed into Storage Spaces Direct pools (validate WWNs/LUN IDs).
- Filesystem enforcement. Enforce ReFS on S2D volumes and NTFS on SAN CSVs via provisioning hooks and preflight checks.
- Network isolation and QoS. Provide dedicated networks or VLANs for S2D replication and RDMA traffic; isolate SAN traffic (Fibre Channel or iSCSI) from S2D east‑west replication to reduce contention.
- Firmware and driver discipline. Maintain vendor‑certified firmware and driver levels for both HBAs and NVMe/SSD stacks. Consistent versions reduce flaky failures during cluster failovers or rebuilds.
- Test live migrations at scale. Don’t assume Move‑VMStorage will behave identically in every environment — run scale tests to evaluate time‑to‑move, transient IOPS spikes, and VM consistency before mass migration.
- Monitoring and observability. Instrument both storage subsystems (array telemetry, multipath metrics, S2D health, CSV owner node metrics) and use synthetic IO tests to detect early signs of contention.
- Patch and update runbooks. Maintain clear sequences for cluster updates, S2D pool version updates, and SAN microcode updates; verify vendor compatibility before rolling patches.
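The WWN/LUN‑ID validation suggested above reduces to a set‑intersection check. The sketch below is illustrative Python; in practice the identifier lists would come from the array’s management API and the cluster’s physical‑disk inventory, and the sample WWNs are made up.

```python
# Provisioning-guard sketch: fail fast if any disk identifier proposed for
# the S2D pool matches a known SAN LUN (compared by WWN/serial).
def assert_no_san_in_pool(proposed: set, san_luns: set) -> None:
    overlap = proposed & san_luns
    if overlap:
        raise ValueError(f"SAN LUNs must never enter an S2D pool: {sorted(overlap)}")

san_luns = {"wwn-600508B100AA", "wwn-600508B100AB"}   # from array management API
proposed = {"eui.0025385C91B1", "eui.0025385C91B2"}   # local NVMe identifiers
assert_no_san_in_pool(proposed, san_luns)             # passes: no overlap
```

Raising instead of warning matters here: a provisioning run that touches a SAN LUN should stop, not log.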
Real‑world signals and vendor momentum
Microsoft’s own learning docs and Windows Server Summit sessions outline and demonstrate the pattern; partner ecosystems are reacting too. Storage vendors are working to validate SAN attach patterns for Azure Local and Windows Server mixed topologies, reflecting customer demand for a staged approach to modernization. These vendor signals indicate the architecture is more than theory — it’s being operationalized in co‑engineered programs and previews. At the same time, community threads and field reports highlight the need for caution: cluster functional level updates, storage pool version changes, and specific topology nuances can require extra testing, and some customers have reported issues during certain upgrade scenarios. These community experiences reinforce that lab validation and a staged rollout are non‑negotiable.
A practical migration playbook (step‑by‑step)
Below is a compact, sequential approach teams can use as a starting template for migrating VMs into or out of S2D when operating a mixed cluster.
- Inventory and classification:
- Catalog VMs by I/O profile, latency sensitivity, and filesystem needs (NTFS features vs ReFS advantages).
- Identify SAN LUNs and S2D volumes and map ownership and current CSV placement.
- Lab validation:
- Reproduce a subset of VMs in a test cluster with identical SAN and S2D topologies, simulate failures and live migrations.
- Prepare destination CSV:
- Ensure S2D volumes are ReFS and SAN CSVs are NTFS.
- Verify free capacity and performance headroom.
- Backup and snapshot:
- Take application‑consistent backups or array snapshots as rollback insurance.
- Execute small‑scale pilot migrations:
- Use Move Virtual Machine Storage or Move‑VMStorage for a handful of VMs.
- Monitor replication, storage latency, and VM performance.
- Scale and automate:
- Use PowerShell or System Center orchestration to process batches with throttling.
- Post‑migration validation:
- Confirm VM health, application metrics, and long‑term stability.
- Operationalize:
- Update runbooks, change management records, and monitoring dashboards for mixed clusters.
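The inventory‑and‑classification step can be expressed as a simple placement function. This is an illustrative Python sketch: the attribute names, the 20,000‑IOPS threshold, and the sample VMs are assumptions, not Microsoft guidance.

```python
# Classification sketch: route each VM to an S2D (ReFS) or SAN (NTFS) CSV
# based on its I/O profile and filesystem needs.
def place(vm: dict) -> str:
    if vm.get("needs_ntfs_features"):          # app depends on NTFS-only semantics
        return "SAN"
    if vm.get("latency_sensitive") or vm.get("iops", 0) > 20_000:
        return "S2D"
    return "SAN"                               # archival / low-IOPS default tier

fleet = [
    {"name": "sql01", "latency_sensitive": True, "iops": 45_000},
    {"name": "archive01", "latency_sensitive": False, "iops": 300},
]
plan = {vm["name"]: place(vm) for vm in fleet}
print(plan)  # {'sql01': 'S2D', 'archive01': 'SAN'}
```

Even a crude rule like this, reviewed with application owners, turns the inventory step into a reproducible artifact instead of a spreadsheet of opinions.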
What to watch for next (roadmap and unanswered questions)
Microsoft’s documentation and summit content make coexistence an official and supported pattern, but the ecosystem is still evolving:
- Watch for vendor‑specific best practices and validated configurations (HBA firmware, SAN array drivers).
- Expect more prescriptive guidance from Microsoft and partners on performance sizing when SANs and S2D share the same cluster.
- Keep an eye on Azure Local and third‑party vendor announcements: hybrid integrations and validated SAN attach programs are already emerging.
Bottom line
The ability to run Storage Spaces Direct CSVs and SAN‑LUN CSVs side‑by‑side in the same Windows Server failover cluster is a pragmatic, customer‑driven enhancement that removes a major blocker for organizations planning gradual modernization. It delivers choice — letting teams combine the low‑latency, hyperconverged advantages of S2D with the scale and enterprise features of existing SAN arrays.
That said, mixed topologies increase operational complexity and demand disciplined automation, careful testing, and strict adherence to Microsoft’s documented constraints (no SAN disks in S2D pools; correct filesystem formatting; separate fault domains). For datacenter teams that enforce those guardrails, the mixed topology offers a practical migration path and an efficient way to balance innovation with investment protection.
Conclusion
This is an important, pragmatic step for Windows Server customers: they no longer must choose between SAN or S2D up front. Instead, they can design hybrid storage strategies that leverage the strengths of both architectures while running them safely under the umbrella of a single failover cluster — provided they follow Microsoft’s rules and invest in the validation and automation that keep mixed environments manageable and supportable.
Source: Neowin Failover Clusters in two versions of Windows Server get S2D and SAN coexistence
Microsoft now officially supports a mixed storage topology that lets Storage Spaces Direct (S2D) and traditional SAN storage operate side‑by‑side inside the same Windows Server Failover Cluster (WSFC), giving enterprises a practical bridge between existing SAN investments and modern hyperconverged performance. This capability—documented by Microsoft and amplified in industry coverage—applies to Windows Server 2022 and is carried forward into Windows Server 2025, and it enforces strict separation between the two storage domains while enabling administrators to place workloads where they perform best.
Background: why this matters now
Storage strategy in enterprise datacenters has long been a tradeoff. Traditional SANs deliver mature enterprise features—snapshots, array replication, centralized management and predictable operational models—while Storage Spaces Direct (S2D) delivers cost‑efficient, low‑latency, node‑local pooling across NVMe/SSD/HDD with ReFS optimizations that suit high‑IOPS modern workloads. Historically, these two approaches required distinct clusters or a rip‑and‑replace approach for migration to hyperconverged infrastructure. Microsoft’s mixed topology formalizes a third option: run both on one WSFC and use each technology for the workloads it fits best. The net result: fewer forklift upgrades, more gradual migrations, and the ability to keep SAN‑based workflows (and SLAs) while adopting S2D for latency‑sensitive workloads such as large VM farms, SQL/DB workloads, container layers, and AI/ML nodes. This is a pragmatic, brownfield‑friendly approach that lowers the barrier for organizations wanting to modernize storage without discarding existing investments.
Overview: what Microsoft now supports
Microsoft’s documentation defines a “hyperconverged with SAN storage” model: S2D‑backed volumes (formatted as ReFS and exposed as CSVs) coexist with SAN‑LUN–backed CSVs (formatted as NTFS) on the same cluster. The two storage domains operate side‑by‑side but remain logically separate—SAN LUNs are not part of the S2D pool and must never be added to it. Supported SAN connectivity includes Fibre Channel, iSCSI, and supported iSCSI targets. The guidance applies to Windows Server 2022 and Windows Server 2025 (and corresponding Azure Local builds).
Key platform rules enforced by Microsoft:
- S2D is DAS‑only: S2D will only aggregate locally attached drives discovered by cluster nodes; it does not and must not claim SAN LUNs.
- Filesystem separation: S2D volumes should be formatted with ReFS (CSVFS_ReFS); SAN volumes that will be CSVs must be formatted NTFS before conversion to CSV. This aligns with how S2D and CSV code paths are optimized.
- Strict operational separation: SAN disks cannot be added to the S2D storage pool; administrators must maintain independent management and observability for each storage subsystem.
Technical fundamentals: S2D vs SAN (concise)
What Storage Spaces Direct (S2D) gives you
- Software‑defined pooling of local disks across cluster nodes.
- Native resiliency (mirroring, nested resiliency, erasure coding) and automatic rebuilds across nodes.
- Strong performance when paired with NVMe and RDMA networks (SMB Direct), and default ReFS optimizations for large virtualization workloads.
What SAN storage gives you
- Centralized, vendor‑managed arrays with mature features: snapshots, replication, deduplication, encryption and advanced telemetry.
- Predictable operational model supported by vendor tooling and enterprise SLAs, especially for archival, long‑term retention, and regulated datasets.
Why mix them?
A mixed cluster lets you:
- Place latency‑sensitive VMs and high‑IOPS workloads on S2D/ReFS volumes.
- Keep large archival workloads, compliance‑sensitive data, or array‑managed features on SAN/NTFS volumes.
- Migrate VMs progressively using storage live migration tools rather than performing a forklift migration.
Important configuration guidelines and hard constraints
Administrators must follow precise rules to keep the cluster supported and reliable. These are the operational must‑dos:
- Never add SAN‑presented disks to an S2D storage pool. Mixing physical origins in a pool is unsupported and can cause unrecoverable states.
- Format S2D volumes as ReFS (CSVFS_ReFS) before converting to CSV. This ensures S2D’s performance and resiliency features are available.
- Format SAN volumes as NTFS prior to converting them to CSVs. NTFS CSVs use different code paths and semantics than ReFS CSVs.
- Keep firmware, drivers, and vendor software versions consistent and validated against Microsoft’s compatibility lists and vendor guidance. Multipath I/O (MPIO) needs to be correctly configured for SAN paths.
- Segment and protect network traffic: dedicate physical/network paths or VLAN/QoS for S2D east‑west replication (SMB Direct / RDMA if used) and separate SAN traffic to avoid contention.
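The filesystem rules above are mechanical enough to encode as a preflight check that runs before any volume is converted to a CSV. This is an illustrative Python sketch: the volume records and field names are assumptions standing in for output from Get-Volume and the cluster inventory.

```python
# Preflight sketch enforcing the filesystem rules: S2D volumes must be ReFS
# and SAN volumes NTFS before conversion to CSV.
REQUIRED_FS = {"S2D": "ReFS", "SAN": "NTFS"}

def csv_preflight(volume: dict) -> list:
    """Return a list of violations; an empty list means safe to convert."""
    required = REQUIRED_FS[volume["origin"]]
    if volume["filesystem"] != required:
        return [f"{volume['label']}: {volume['origin']} volume must be "
                f"{required}, found {volume['filesystem']}"]
    return []

violations = csv_preflight({"label": "Vol1", "origin": "S2D", "filesystem": "NTFS"})
print(violations)  # flags the wrongly formatted S2D volume
```

Running a check like this from the same automation that creates CSVs means a mis‑formatted volume is caught before it ever carries workloads.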
Practical migration scenarios and a recommended playbook
The mixed topology shines when used as a migration tool or tiered‑storage strategy. Below is a recommended phased playbook—execute it in a lab and iterate.
- Inventory and classify VMs and data by I/O profile, latency sensitivity, and filesystem requirements. Tag workloads that need ReFS features vs those that depend on SAN features.
- Validate hardware and firmware versions; obtain Microsoft‑validated compatibility matrices for host SKUs, HBAs, SAN arrays and MPIO drivers. Confirm support for your OS build (Windows Server 2022 or 2025).
- Build a test cluster that mirrors the production PHY/NIC, RDMA, and SAN zoning to verify failovers, live migrations, and stress behavior. Run Test‑Cluster and S2D validation tests.
- Prepare destination CSVs: create S2D volumes formatted as CSVFS_ReFS and SAN LUNs formatted as NTFS, then add them to the cluster as CSVs.
- Backup and snapshot before migration. Use array snapshots for SAN‑resident VMs and application‑consistent backups for S2D residents to ensure rollback options.
- Pilot migrations: use Move‑VMStorage in PowerShell (or the Move Virtual Machine Storage action in Failover Cluster Manager) for Hyper‑V workloads; monitor IOPS, CPU, and CSV latency during test runs. Validate application behavior.
- Scale migrations in controlled batches, automate via PowerShell or orchestration tools, and maintain throttling to avoid saturating fabrics.
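The "controlled batches" step can be sketched as a simple chunking helper that processes fixed‑size groups with validation between them. Illustrative Python; the batch size and VM names are assumptions to tune from pilot results.

```python
# Batching sketch for controlled rollout: migrate VMs in fixed-size groups,
# pausing for validation (health checks, latency review) between groups.
def batches(vms: list, size: int):
    """Yield successive fixed-size groups from the migration queue."""
    for i in range(0, len(vms), size):
        yield vms[i:i + size]

queue = ["VM01", "VM02", "VM03", "VM04", "VM05"]
groups = list(batches(queue, 2))
print(groups)  # [['VM01', 'VM02'], ['VM03', 'VM04'], ['VM05']]
```

Pausing between groups is what distinguishes a controlled rollout from a mass move: each pause is an opportunity to abort before saturation or regressions compound.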
Network design and performance considerations
S2D is network‑sensitive. Latency and throughput directly affect rebuild times and VM performance.
- Use dedicated low‑latency networks for S2D replication. Microsoft recommends 10+ GbE with RDMA (RoCE or iWARP) when available—enabling SMB Direct greatly reduces CPU overhead.
- Isolate SAN fabric traffic (Fibre Channel or iSCSI) so heavy SAN I/O cannot interfere with S2D east‑west traffic. Plan NIC/VLANs and QoS accordingly.
- Monitor SMB, NIC, and CSV latency counters closely. Synthetic IO testing during pilot migration will expose contention windows.
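Watching tail latency, not just the average, is what exposes contention windows. The sketch below is illustrative Python: the 2x‑over‑baseline factor and the sample values are assumptions, and real samples would come from SMB/CSV performance counters.

```python
# Contention-detection sketch: flag when the p99 of sampled CSV or SMB
# latency rises well above an agreed baseline.
import statistics

def contention_flag(samples_ms: list, baseline_ms: float, factor: float = 2.0) -> bool:
    """True when the 99th-percentile latency exceeds baseline * factor."""
    p99 = statistics.quantiles(samples_ms, n=100)[98]   # 99th-percentile cut
    return p99 > baseline_ms * factor

steady = [1.0] * 100            # flat latency at baseline
spiky = [1.0] * 99 + [100.0]    # one pathological outlier drags the tail up
print(contention_flag(steady, 1.0), contention_flag(spiky, 1.0))
```

Averaging the spiky series would hide the outlier almost entirely; the percentile check catches exactly the transient interference that mixed SAN+S2D fabrics are prone to.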
Observability, monitoring and runbooks
Mixed clusters increase operational surface area. Treat monitoring as a first‑class requirement.
- Monitor each storage domain with vendor tools (array telemetry, snapshot/replication health) and Windows S2D health counters (resiliency, pool usage, rebuild progress). Correlate alerts across systems.
- Instrument cluster CSV owner node metrics and SMB performance counters to detect cross‑domain contention. Automate synthetic IO tests during scheduled windows.
- Build provisioning guards: preflight checks that enforce filesystem type (ReFS for S2D, NTFS for SAN CSVs) and prevent SAN LUNs from being added to S2D pools via automation.
Backup, snapshots and ransomware strategies
A mixed cluster offers opportunities for layered protection:
- Use SAN‑array snapshots/replication as an immutable/array‑level recovery tier for VMs that were migrated from S2D to SAN CSVs or that are persistently kept on SAN. This can provide very fast RTOs and array‑native replication capabilities.
- For S2D volumes (ReFS), integrate Microsoft and third‑party backup tooling that understands CSVs and ReFS behaviors; keep backups off‑cluster to remove single‑location risk.
- Plan ransomware responses that include isolated snapshot retention and tested recovery playbooks across both storage domains; mixing technologies helps diversify recovery targets.
Vendor ecosystem, Azure Local and partner activity
The mixed topology has triggered partner responses: storage vendors and cloud‑adjacent offerings (Azure Local) are publishing integration patterns and previews showing SAN attach and validated topologies. These vendor announcements indicate the approach is being operationalized, but the authoritative support boundary remains Microsoft’s compatibility and validation documentation—request validated compatibility matrices from vendors before procurement. Enterprises should treat vendor press as signals of momentum but base production acceptance on joint vendor‑Microsoft validation certificates that list firmware/driver/host combos. This prevents finger‑pointing in complex post‑migration incidents.
Strengths, risks, and what to watch for (critical analysis)
Strengths
- Practical modernization path: Enables progressive adoption of S2D while preserving SAN investments and vendor workflows.
- Workload optimization: Place high‑IOPS workloads on S2D/ReFS and retention/compliance workloads on SAN/NTFS to balance performance and operational needs.
- Cost and operational flexibility: Potential CAPEX/OPEX savings by reusing SAN capacity and rightsizing hyperconverged nodes for compute.
Risks
- Misconfiguration hazards: Accidentally adding SAN disks to S2D pools or mixing filesystems creates unsupported states and potential data loss. Strict automation and provisioning checks are mandatory.
- Performance interference: SAN I/O can congest fabrics and affect S2D replication; network isolation and QoS are non‑optional.
- Operational complexity: Two storage models require cross‑team coordination, synchronized firmware lifecycles, and more sophisticated monitoring.
- Upgrade and functional level pitfalls: Rolling OS or cluster functional upgrades have corner cases; always validate upgrade paths and test rollbacks. Community reports note edge cases during certain in‑place upgrades.
- Any marketing statements promising specific reduction ratios, universal performance gains, or “turnkey” migration speeds should be treated cautiously until proven in a representative PoC. Ask vendors for dataset‑specific validation and contractual guarantees where possible.
Recommended checklist for planning and execution
- Obtain Microsoft’s current documentation for “Hyperconverged with SAN storage” and read the formatting and separation rules.
- Collect validated hardware/firmware matrices from storage vendors and host OEMs.
- Build a test cluster and run validation tests (Test‑Cluster, S2D validation).
- Automate provisioning guards to enforce filesystem types and prevent SAN disks entering S2D pools.
- Design isolated network fabrics for S2D replication and SAN traffic; include QoS and RDMA where appropriate.
- Pilot migrations with representative workloads, instrumenting CPU, SMB/CSV latency, and array telemetry.
- Update runbooks, monitoring, and backup/DR procedures to manage both storage domains.
Conclusion
Microsoft’s decision to support S2D and SAN coexistence inside a single Windows Server Failover Cluster removes a major operational and financial blocker for many enterprises. When applied with discipline—strict filesystem separation, careful network design, validated hardware matrices, and a staged migration plan—the mixed topology delivers a pragmatic path to modernize without abandoning mission‑critical SAN investments. The feature is a powerful tool for tiered storage strategies, phased migrations, and hybrid cloud patterns, but it increases operational surface area and requires rigorous validation, automation, and cross‑team discipline to avoid unsupported configurations. For datacenter architects, the immediate next steps are clear: validate with vendors, pilot in an isolated lab, harden provisioning automations, and update runbooks to manage the new mixed reality of storage on Windows Server.
Source: Petri IT Knowledgebase Windows Server Failover Clusters Get S2D and SAN Coexistence Support