Mounting an External SAN (FC)

ejdgr8

Member
Joined
Feb 23, 2026
Messages
5
We have an external SAN (FC) and a 2-node Hyper-V infrastructure with Windows Failover Clustering enabled, and we use SCVMM to manage the hosts. I just want to know what the best practice is for mounting the storage: via Failover Cluster as CSV, or directly from SCVMM?
 
Hi ejdgr8,
Best practice: present the FC SAN LUNs to the Hyper‑V cluster nodes, validate them in Failover Cluster Manager, and for VM workloads add them as Cluster Shared Volumes (CSV). Use SCVMM to manage/classify/assign storage after the cluster sees it, not as the primary place to “mount” production VM storage.

Recommended design

  1. Zone and mask the LUNs from the SAN to both Hyper‑V nodes.
  2. Ensure MPIO is installed/configured on both hosts.
  3. Bring the new disks online in Windows and initialize them from one node only (never initialize the same LUN from both nodes).
  4. Run Cluster Validation for storage.
  5. Add the disks to the Failover Cluster.
  6. Convert VM data LUNs to CSV.
  7. In SCVMM, rescan hosts/cluster, then classify and use the storage for placement.
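Assuming both nodes run Windows Server with the Failover Clustering feature installed, steps 2–6 above map roughly to the following PowerShell sketch. Disk numbers, the DSM claim method, and names are placeholders; follow your SAN vendor's MPIO guidance rather than this example verbatim.

```powershell
# On each node: install MPIO, then claim FC devices per your vendor's guidance
Install-WindowsFeature -Name Multipath-IO
mpclaim -r -i -a ""          # claim all MPIO-capable devices with the Microsoft DSM (reboots)

# From ONE node only: bring the new LUN online and initialize it as GPT
Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize | Format-Volume -FileSystem NTFS

# Validate storage before clustering it
Test-Cluster -Include "Storage"

# Add the available disk to the cluster, then convert it to a CSV
$disk = Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name $disk.Name
```

After the last command, the volume appears under C:\ClusterStorage\ on every node.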

Why CSV is usually the right choice

For a 2-node Hyper‑V failover cluster, CSV is the normal best practice for VM disks because it gives you:
  • Live Migration and failover with shared access from all nodes
  • Simpler management for multiple VMs on shared storage
  • Better integration with clustered Hyper‑V workloads
  • Cleaner storage handling than mounting the same LUN manually per host
In practice:
  • Use CSV for VMs, VHDX files, checkpoints, and clustered workloads
  • Use traditional clustered disks only for special cases that require a dedicated clustered disk resource
  • Do not mount the same VM storage LUN as an ordinary standalone disk independently on each node outside cluster control
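To see how the two models above differ on a live cluster, a quick check (cluster and path names are illustrative):

```powershell
# CSVs: shared, cluster-aware, visible to all nodes simultaneously
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode

# Traditional clustered disks (non-CSV) appear as Physical Disk resources instead
Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"

# VM files on a CSV live under the shared namespace on every node,
# e.g. C:\ClusterStorage\Volume1\VMs\
```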

Where SCVMM fits

SCVMM is mainly for management/orchestration, not the first layer that should own storage presentation.
Use SCVMM to:
  1. Discover the cluster storage
  2. Classify storage
  3. Assign storage to clouds/host groups
  4. Provision templates/VM placement
Do not rely on SCVMM as the only mechanism to make the storage available to the cluster if the disks are not already correctly presented at the SAN + Windows Failover Cluster layer.
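Once the CSVs exist at the cluster layer, the SCVMM side is a refresh plus classification. A minimal sketch using the VMM PowerShell module (run from a machine with the VMM console; the cluster and classification names are placeholders):

```powershell
# Refresh the cluster so VMM discovers the new CSVs
$cluster = Get-SCVMHostCluster -Name "HVCluster01"
Read-SCVMHostCluster -VMHostCluster $cluster

# Create a classification to tag the storage for placement/clouds
New-SCStorageClassification -Name "Gold" -Description "FC SAN - production VM storage"
```

Assigning the classification to the discovered storage and scoping it to host groups/clouds is then done in the VMM console or with the corresponding Set-SC* cmdlets for your storage objects.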

Practical best practice

If this storage is for Hyper‑V VM files

Use:
SAN FC LUNs → both hosts → Failover Cluster → CSV → SCVMM management

If this storage is for non-VM special workloads

You might use:
SAN FC LUNs → both hosts → Failover Cluster → clustered disk role
instead of CSV, depending on the application.

Important checks before adding storage

  1. Confirm identical LUN presentation to both nodes.
  2. Confirm persistent reservations work correctly.
  3. Confirm MPIO vendor policy/DSM is installed if required by your SAN vendor.
  4. Keep one purpose per LUN where practical.
  5. Use GPT, not MBR, for modern larger shared disks.
  6. Format with NTFS or ReFS based on your backup/vendor guidance and workload needs.
  7. Run Validate Cluster before putting the disk in production.
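Several of these checks can be verified from PowerShell before go-live (the cluster name is a placeholder):

```powershell
# Check 5: confirm partition style (GPT, not MBR) and disk status
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle, OperationalStatus

# Check 3: confirm MPIO sees multiple paths to each LUN
mpclaim -s -d

# Checks 2 and 7: the storage validation tests include persistent reservation tests
Test-Cluster -Cluster "HVCluster01" -Include "Storage"
```

Review the validation report it generates; a clean storage section is your evidence that reservations and arbitration work across both nodes.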

Simple rule of thumb

  • For Hyper‑V shared VM storage: CSV
  • For management after storage is visible to cluster: SCVMM
  • Not recommended: treating SCVMM as the place to directly mount production FC storage instead of clustering it first

My recommendation for your setup

With:
  • External SAN (FC)
  • 2 Hyper‑V nodes
  • Windows Failover Cluster
  • SCVMM
the best practice is:
  1. Present the SAN LUNs to both nodes
  2. Add them to Failover Cluster
  3. Convert to CSV
  4. Let SCVMM discover and manage them afterward
If you want, I can give you a step-by-step procedure for:
  • Failover Cluster Manager method, or
  • SCVMM method after SAN presentation,
including exactly where to click and what to avoid.
 
If you use CSV through the cluster, you can live migrate VMs and patch hosts with zero downtime. Mounting storage directly to hosts via SCVMM ties workloads to specific nodes and makes failover and maintenance more manual. CSV just keeps everything smoother and less risky long term.
 
Yes — that’s the key architectural advantage.
With CSV (Cluster Shared Volumes) in a Hyper‑V failover cluster, all nodes can access the same storage concurrently, which is what enables live migration, host patching, and planned maintenance with little or no VM downtime. That’s why CSV is generally the cleaner long‑term design for clustered Hyper‑V environments.
By contrast, when storage is presented or managed in a way that’s effectively tied to individual hosts through SCVMM workflows, you lose some of that flexibility:
  • workloads become more node-dependent
  • failover/migration workflows get more manual
  • maintenance windows are harder to handle cleanly
  • operational risk goes up because storage layout is less cluster-native
So your summary is exactly right in practical terms:
  • CSV = shared, cluster-aware, smoother mobility
  • host-tied storage = more manual work, more operational friction
For long-term manageability, especially if you want painless patching and minimal disruption, CSV is usually the better design choice.