Given what you’ve already proven:
- The server boots and runs fine with the LSI card removed.
- With the card installed but the backplane/SAS cables unplugged, the server also starts normally.
- As soon as the drives are connected again, things go sideways and the drives show as Unconfigured.

That strongly indicates the boot failure is on the RAID side (a bad drive, bad slot/backplane, bad SAS cable, or power draw at drive spin-up), not Windows Server 2019 itself.
Below is the fastest way to get to a definite answer.
1) Confirm which LSI device you’re actually configuring (very important)
The screen you posted earlier is
“LSI Software RAID Configuration Utility Ver A.60 … BIOS Version M1068e…” (SAS1068E-era). That is commonly an
onboard LSI SAS controller on some Supermicro X8 boards, and it’s
not the same interface as a MegaRAID 9260/LSI00198 card.
Action:
- In the Supermicro BIOS, look for an Onboard LSI SAS option and, if present, temporarily disable it (or at least disable its boot ROM).
- Make sure you’re entering the MegaRAID WebBIOS for the add-in RAID card (usually a different prompt/key combo at POST).
If you’re accidentally configuring the onboard SAS controller while the RAID is on a different card, everything will look “wrong” (like all disks being “unconfigured”).
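To keep the two controllers straight, here is an illustrative lookup of the two chip families discussed above. This is a memory aid, not an exhaustive table; the SAS2108-to-9260 mapping and the Ctrl+H hotkey are standard for that card family, but verify against your POST banners.

```python
# Illustrative mapping: which LSI chip belongs to which config utility.
# The entries reflect the two devices discussed in this thread.
CONTROLLERS = {
    "SAS1068E": {
        "product": "onboard LSI SAS (common on Supermicro X8 boards)",
        "utility": "LSI Software RAID Configuration Utility",
    },
    "SAS2108": {
        "product": "MegaRAID SAS 9260 (LSI00198)",
        "utility": "MegaRAID WebBIOS (Ctrl+H at POST)",
    },
}

def which_utility(chip: str) -> str:
    """Return the POST config utility that matches a given LSI chip."""
    return CONTROLLERS[chip]["utility"]

print(which_utility("SAS1068E"))
```

If the banner you see at POST says "Ver A.60 ... M1068e", you are in the onboard SAS1068E utility, and any array on the 9260 will not appear there.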
2) Make Windows boot even with the RAID card installed (boot order / boot support)
Since your OS is on motherboard SATA (per your earlier post), Windows should be able to boot even if the RAID is broken—
as long as the system isn’t trying to boot from the LSI.
In motherboard BIOS
- Set Hard Disk Priority / Boot Priority so the motherboard SATA SSD is first.
- Disable boot from “SCSI/LSI” if there’s an option.
In LSI/MegaRAID BIOS (if you can get in)
- Set the controller Boot Support = Disabled (or “BIOS boot support: off”) if you do not boot the OS from that RAID.
This prevents the RAID controller from taking over the boot path.
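Conceptually, the firmware boot order is just a priority list: it tries each entry in order and boots the first one that responds. A tiny model of that (device names here are placeholders) shows why putting the SATA SSD first means a broken RAID path cannot block Windows from loading:

```python
# Tiny model of firmware boot-order behavior: try each entry in
# priority order and boot the first device that is actually bootable.
# With the LSI boot support disabled, its entry never answers.
def first_bootable(order, bootable):
    """Return the first device in `order` flagged bootable, else None."""
    for dev in order:
        if bootable.get(dev):
            return dev
    return None

order = ["SATA SSD", "LSI RAID", "PXE"]
# Only the motherboard SATA SSD is healthy in this scenario:
print(first_bootable(order, {"SATA SSD": True}))
```

The practical point: even if the LSI never comes up, the system should still fall through to the SATA SSD as long as it sits first in the priority list.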
3) The most effective test: find the one drive/slot/cable that’s crashing the system
Because your machine only fails when the drive side is connected, do this:
- Power off.
- Leave the LSI card installed and SAS cable connected to the backplane.
- Pull ALL RAID drives out of the chassis/backplane.
- Boot with zero drives inserted and confirm the server stays up.
- Insert ONE drive only, boot again.
- Repeat adding one drive at a time until the server starts power-cycling / won’t boot.
Result:
- The drive (or bay) that triggers the cycling is very likely failed (or that slot/backplane lane is bad).
- If it fails even with zero drives inserted, suspect SAS cable/backplane or the controller.
This test is the quickest way to stop guessing.
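The procedure above is essentially a linear scan: add one drive at a time and stop at the first configuration that fails to boot. A sketch, where `boots_ok` stands in for the physical power-on test you perform at the machine:

```python
# Sketch of the drive-isolation procedure as a linear scan.
# `boots_ok(inserted)` is a stand-in for physically booting the server
# with exactly that set of drives inserted.
def find_bad_drive(bays, boots_ok):
    """Return the first bay whose drive makes boot fail, else None."""
    inserted = []
    if not boots_ok(inserted):
        # Fails even with zero drives: suspect cable/backplane/controller.
        return "cable/backplane/controller"
    for bay in bays:
        inserted.append(bay)
        if not boots_ok(inserted):
            return bay  # this drive (or this slot) triggers the cycling
    return None  # everything boots; the problem was elsewhere

# Example: pretend the drive in bay 3 is the bad one.
bad = find_bad_drive(range(8), lambda ins: 3 not in ins)
print(bad)  # 3
```

Once you find the failing bay, swap that drive into a known-good bay: if the failure follows the drive, the drive is bad; if it stays with the bay, suspect the slot/backplane lane.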
4) Don’t worry about “Unconfigured” yet (it’s a symptom)
If the controller is losing its mind during scan or cannot read metadata reliably (bad drive/cable/backplane), it may present disks as
Unconfigured.
Once the hardware is stable, you can decide:
- If you want the old array: look for Foreign Config Import (do not Initialize).
- If data is not critical (you said it isn’t): you can simply wipe/recreate the array after you identify/replace the bad part.
5) If the system is “cycling off” specifically at drive spin-up: check power
Older Supermicro + multiple HDDs + RAID card can exceed PSU capability during spin-up.
Quick checks:
- Any recent PSU change? Any splitters?
- Try booting with only 1–2 RAID drives inserted (after you confirm boot stability).
- If it becomes stable with fewer drives, PSU/spin-up load is likely.
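For a back-of-envelope 12 V spin-up budget: the per-drive surge figure (~2 A at 12 V for a 3.5" HDD), the controller allowance, and the PSU rail rating below are all assumptions; substitute the numbers from your drives' datasheets and your PSU label.

```python
# Back-of-envelope 12 V spin-up budget. All three constants are
# assumptions for illustration; use your actual datasheet/PSU values.
SPINUP_AMPS_PER_HDD = 2.0   # typical 3.5" HDD 12 V surge (assumption)
RAID_CARD_AMPS = 1.0        # rough allowance for the controller (assumption)
PSU_12V_RAIL_AMPS = 20.0    # check your PSU label (assumption)

def spinup_margin(n_drives: int) -> float:
    """12 V amps of headroom left if all drives spin up simultaneously."""
    draw = n_drives * SPINUP_AMPS_PER_HDD + RAID_CARD_AMPS
    return PSU_12V_RAIL_AMPS - draw

for n in (2, 4, 8):
    print(n, "drives -> headroom:", spinup_margin(n), "A")
```

If the margin goes near zero at your drive count, that matches the "cycles off at spin-up" symptom; many MegaRAID firmwares also offer a staggered spin-up setting that spreads the surge out, which is worth enabling if available.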
Two questions that will let me give exact next steps
- When it “cycles off” with the LSI + drives connected: does it happen before you see the Windows logo (during POST), or after Windows starts loading?
- Does your Supermicro X8ST3-F BIOS show an option for Onboard LSI SAS (enabled/disabled)?
If you answer those, I’ll tell you exactly what to disable (onboard LSI option ROM vs MegaRAID boot support vs pure hardware isolation).