Thanks — that error (0x8007042B‑0x2000D, “Failed on the SAFE_OS phase — Migrate Data”) means setup is failing during the offline portion where it migrates data, drivers and services before the final commit. That phase commonly trips over open handles, incompatible drivers/services, encryption/volume issues or role-specific blockers (AD/DC, Hyper‑V, etc.). Below is a focused troubleshooting plan you can run through (with safe fallbacks if you prefer a different approach).
Why this fails (short list)
- Third‑party drivers or agents (backup, AV, storage filter drivers) blocking file/service migration. This is a very common cause.
- Corrupt component store or servicing/commit problems that prevent post‑reboot commit. Running a repair/repair‑upgrade is often required.
- Encrypted / locked volumes (BitLocker, iSCSI mounted LUNs, dynamic disks) or insufficient free space on C: or on EFI/Reserved partitions. These block the SAFE_OS / Migrate Data step.
- Domain controller / AD DS role or other server roles that must be handled specially — in‑place upgrades of DCs are riskier and have special guidance.
What to collect first (logs — these tell the root cause)
- After the failed attempt, zip and attach these files (or paste relevant lines):
- C:\$WINDOWS.~BT\Sources\Panther\setuperr.log
- C:\$WINDOWS.~BT\Sources\Panther\setupact.log
- C:\Windows\Panther\setupact.log (if present)
- Any files under C:\$WINDOWS.~BT\Sources\Panther\UserData\ (miglog.xml and related files tend to show which component failed)
- If available, C:\Windows\Logs\CBS\cbs.log (last 200 lines) and C:\Windows\Logs\DISM\dism.log
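If it helps, a short elevated PowerShell snippet can gather whichever of those logs exist into one zip — the paths below are the defaults, so adjust if yours differ:

```powershell
# Collect the upgrade logs that exist into a single zip on the Desktop.
# Single quotes keep PowerShell from expanding the $ in $WINDOWS.~BT.
$logs = @(
    'C:\$WINDOWS.~BT\Sources\Panther\setuperr.log',
    'C:\$WINDOWS.~BT\Sources\Panther\setupact.log',
    'C:\Windows\Panther\setupact.log',
    'C:\Windows\Logs\CBS\CBS.log',
    'C:\Windows\Logs\DISM\dism.log'
) | Where-Object { Test-Path $_ }
Compress-Archive -Path $logs -DestinationPath "$env:USERPROFILE\Desktop\upgrade-logs.zip" -Force
```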
Quick commands to extract the most useful lines
- Open elevated cmd and run:
- findstr /i "0x8007042B 0x2000D Migrate" C:\$WINDOWS.~BT\Sources\Panther\*.log
- findstr /i "error fail" C:\$WINDOWS.~BT\Sources\Panther\*.log
- (If you want to copy last lines) powershell -Command "Get-Content C:\Windows\Logs\CBS\cbs.log -Tail 200"
Immediate, reversible triage (try these before re-running setup)
- Suspend BitLocker (if enabled):
- manage-bde -protectors -disable C: (re-enable after upgrade).
- Disconnect any nonessential disks/USB drives and unmount any iSCSI volumes.
- Uninstall or fully disable third‑party AV, backup agents, security/endpoint tools and any storage filter drivers (e.g. vendor filter drivers, vendor iSCSI drivers). Reboot.
- Stop or uninstall non‑Microsoft services you can do without during the upgrade (backup services, third‑party replication).
- Free space: make sure C: has ample free space (at least 20–30 GB) and, on UEFI systems, check that the System Reserved/EFI partition isn’t nearly full (that can cause related errors).
- Run the checks you already ran again and add an offline source for DISM:
- sfc /scannow
- dism /online /cleanup-image /restorehealth /source:wim:X:\sources\install.wim:1 /limitaccess
(Mount your Server 2025 ISO and replace X: with the ISO drive letter). This repairs the component store from the source image.
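Put together, the repair sequence might look like this in an elevated PowerShell — the ISO path is an example, and the :1 image index is an assumption (confirm it first with dism /get-wiminfo):

```powershell
# Suspend BitLocker for the upgrade (re-enable afterwards with -enable).
manage-bde -protectors -disable C:

# Mount the Server 2025 ISO and find its drive letter (example path -- adjust).
$drv = (Mount-DiskImage -ImagePath 'D:\ISOs\WindowsServer2025.iso' -PassThru |
        Get-Volume).DriveLetter

sfc /scannow

# Repair the component store from the ISO's install.wim.
# Index 1 is assumed; verify with: dism /get-wiminfo /wimfile:<drive>:\sources\install.wim
dism /online /cleanup-image /restorehealth "/source:wim:${drv}:\sources\install.wim:1" /limitaccess
```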
Try an in‑place repair upgrade (recommended when DISM/SFC don’t fix it)
- Mount the Server 2025 ISO on the server, run Setup from the mounted image and choose “Keep personal files and apps” (a standard in‑place repair/upgrade). That replaces system files and finishes servicing steps that otherwise choke during SAFE_OS, and it’s the widely recommended route when servicing/commit fails.
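For example, once the ISO is mounted (say at E:), the upgrade can be launched from an elevated prompt — the /auto upgrade switch should preselect keeping files and apps, or just run setup.exe and pick the option in the UI:

```powershell
# E: is the mounted ISO's drive letter -- substitute yours.
E:\setup.exe /auto upgrade /dynamicupdate enable
```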
If the server is a Domain Controller or holds FSMO roles
- Consider building a new Server 2025 VM/host and migrating roles (preferred long‑term). In‑place upgrades of DCs are supported but riskier; best practice often recommends introducing a new OS server and transferring roles rather than upgrading an existing DC in place. Backup system state first if you proceed with in‑place.
If you must preserve settings/roles/users (your stated goal)
Option A — In‑place repair upgrade (keeps roles/settings) — try after the triage steps above (suspend BitLocker, remove third‑party blockers, run DISM with ISO as source). This is the fastest route to preserve everything.
Option B — Safer/cleaner (recommended for DCs / critical servers)
- Build a new Server 2025 system (VM or physical), join it to the domain, promote as DC (if AD), transfer FSMO and other roles, migrate file/print/DFS/WSUS settings, then demote the old server. This keeps uptime and is less fragile than an in‑place upgrade. Many admins prefer this route for servers with important roles.
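If you go this route for AD, the FSMO transfer itself is short once the new DC is promoted (NEWDC01 below is a placeholder name — substitute the real one):

```powershell
# Move all five FSMO roles to the new Server 2025 DC, then verify placement.
Move-ADDirectoryServerOperationMasterRole -Identity 'NEWDC01' -OperationMasterRole `
    SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster
netdom query fsmo
```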
If you want me to look at the specific cause
- Attach the log files I listed above (setuperr.log, setupact.log, and the Panther/miglog.xml files). If you can paste the last ~40 lines from setuperr.log and the matching section from setupact.log where the 0x8007042B appears, I’ll parse it and tell you the exact component/service/driver that failed and the next precise action.
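Something like this will pull exactly those excerpts:

```powershell
# Last 40 lines of setuperr.log, then every 0x8007042B hit in setupact.log
# with 5 lines before / 10 lines after for context.
Get-Content 'C:\$WINDOWS.~BT\Sources\Panther\setuperr.log' -Tail 40
Select-String -Path 'C:\$WINDOWS.~BT\Sources\Panther\setupact.log' `
    -Pattern '0x8007042B' -Context 5,10
```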
Final notes / cautions
- Always take a full system backup and a system state backup (for AD) before an in‑place upgrade.
- If you hit repeated SAFE_OS Migrate Data failures pointing at a particular driver/service, remove that software before retrying. If the logs point to “Access denied” or locked files, the offending service will need to be stopped/uninstalled.
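For the backups, wbadmin covers both — E: as the backup target is a placeholder, and the Windows Server Backup feature must be installed first:

```powershell
# Full backup of all critical volumes, then a system state backup
# (the latter includes AD on a DC). Run elevated.
wbadmin start backup -backupTarget:E: -allCritical -quiet
wbadmin start systemstatebackup -backupTarget:E: -quiet
```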
If you want, paste the setuperr.log and setupact.log excerpts here (or attach the full files) and I’ll read them and give the exact step to fix the blocker.