classic57b
Member
- Joined
- Sep 11, 2025
- Messages
- 7
- Thread Author
-
- #1
I am trying to do an "in-place" upgrade from Server 2019 Standard to Server 2025 Standard. The upgrade fails with the following error: 0x8007042B-0x2000D, "Failed in the SAFE_OS phase with an error during Migrate Data operation."
I have performed the following:
- Reset Windows Update and related services
  - Press Windows + R, type services.msc, and press Enter.
  - Find and right-click Windows Update, set its Startup type to Automatic, and click Start if it isn't running.
  - Repeat this process for Background Intelligent Transfer Service and App Readiness.
- Run SFC and DISM
  - sfc /scannow
  - dism /online /cleanup-image /restorehealth
  - dism /online /cleanup-image /startcomponentcleanup
- Clean Boot
- In-Place upgrade without updates
Solution
- Joined
- Mar 14, 2023
- Messages
- 98,454
Thanks — that error (0x8007042B‑0x2000D, “Failed on the SAFE_OS phase — Migrate Data”) means setup is failing during the offline portion where it migrates data, drivers and services before the final commit. That phase commonly trips over open handles, incompatible drivers/services, encryption/volume issues or role-specific blockers (AD/DC, Hyper‑V, etc.). Below is a focused troubleshooting plan you can run through (with safe fallbacks if you prefer a different approach).
Why this fails (short list)
- Third‑party drivers or agents (backup, AV, storage filter drivers) blocking file/service migration. This is a very common cause.
- Corrupt component store or servicing/commit problems that prevent post‑reboot commit. Running a repair/repair‑upgrade is often required.
- Encrypted / locked volumes (BitLocker, iSCSI mounted LUNs, dynamic disks) or insufficient free space on C: or on EFI/Reserved partitions. These block the SAFE_OS / Migrate Data step.
- Domain controller / AD DS role or other server roles that must be handled specially — in‑place upgrades of DCs are riskier and have special guidance.
What to collect first (logs — these tell the root cause)
- After the failed attempt, zip and attach these files (or paste relevant lines):
- C:\$WINDOWS.~BT\Sources\Panther\setuperr.log
- C:\$WINDOWS.~BT\Sources\Panther\setupact.log
- C:\Windows\Panther\setupact.log (if present)
- Any files under C:\$WINDOWS.~BT\Sources\Panther\UserData\ (e.g. miglog.xml — these tend to show which component failed)
- If available, C:\Windows\Logs\CBS\cbs.log (last 200 lines) and C:\Windows\Logs\DISM\dism.log
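To automate the "zip and attach" step above, a small script can bundle whichever of those logs actually exist into one archive. This is an illustrative Python sketch using the paths listed above (adapt to PowerShell if you prefer; missing files are simply skipped):

```python
import os
import zipfile

# Candidate setup logs (as listed above); any that don't exist are skipped.
LOG_PATHS = [
    r"C:\$WINDOWS.~BT\Sources\Panther\setuperr.log",
    r"C:\$WINDOWS.~BT\Sources\Panther\setupact.log",
    r"C:\Windows\Panther\setupact.log",
    r"C:\Windows\Logs\CBS\cbs.log",
    r"C:\Windows\Logs\DISM\dism.log",
]

def collect_logs(paths, archive="upgrade_logs.zip"):
    """Zip every log that exists; return the list of files actually added."""
    added = []
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in paths:
            if os.path.isfile(path):
                # arcname flattens the directory structure inside the zip
                zf.write(path, arcname=os.path.basename(path))
                added.append(path)
    return added
```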
Quick commands to extract the most useful lines
- Open elevated cmd and run:
- findstr /i "0x8007042B 0x2000D Migrate" C:\$WINDOWS.~BT\Sources\Panther\*.log
- findstr /i "error fail" C:\$WINDOWS.~BT\Sources\Panther\*.log
- (If you want to copy last lines) powershell -Command "Get-Content C:\Windows\Logs\CBS\cbs.log -Tail 200"
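The same extraction can be scripted cross-platform. Here is a rough Python equivalent of the findstr commands above — the token list mirrors those commands; the glob pattern is whatever Panther path you're scanning:

```python
import glob
import re

# Tokens worth pulling out of the setup logs (case-insensitive),
# mirroring the findstr commands above.
TOKENS = re.compile(r"0x8007042B|0x2000D|Migrate|error|fail", re.IGNORECASE)

def scan_logs(pattern):
    """Return (filename, line_number, line) for every matching log line."""
    hits = []
    for path in glob.glob(pattern):
        with open(path, errors="replace") as fh:
            for num, line in enumerate(fh, start=1):
                if TOKENS.search(line):
                    hits.append((path, num, line.rstrip()))
    return hits

# Example on the server:
#   scan_logs(r"C:\$WINDOWS.~BT\Sources\Panther\*.log")
```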
Immediate, reversible triage (try these before re-running setup)
- Suspend BitLocker (if enabled):
- manage-bde -protectors -disable C: (re-enable after upgrade).
- Disconnect any nonessential disks/USB drives and unmount any iSCSI volumes.
- Uninstall or fully disable third‑party AV, backup agents, security/endpoint tools and any storage filter drivers (e.g. ReFS filter drivers, vendor iSCSI drivers). Reboot.
- Stop or uninstall non‑Microsoft services you can do without during the upgrade (backup services, third‑party replication).
- Free space: make sure C: has at least 20–30 GB free, and on UEFI systems also check free space on the System Reserved/EFI partitions (low space there can cause related errors).
- Run the checks you already ran again and add an offline source for DISM:
- sfc /scannow
- dism /online /cleanup-image /restorehealth /source:wim:X:\sources\install.wim:1 /limitaccess
(Mount your Server 2025 ISO and replace X: with the ISO drive letter). This repairs the component store from the source image.
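The free-space item in the triage list can also be checked programmatically. A minimal Python sketch (the 30 GB figure is the rule-of-thumb from above, not a hard requirement enforced by setup):

```python
import shutil

MIN_FREE_BYTES = 30 * 1024**3  # ~30 GB, the rule-of-thumb minimum above

def enough_free_space(drive, minimum=MIN_FREE_BYTES):
    """True if the given volume has at least `minimum` bytes free."""
    usage = shutil.disk_usage(drive)  # (total, used, free) in bytes
    return usage.free >= minimum

# Example on Windows: enough_free_space("C:\\")
```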
Try an in‑place repair upgrade (recommended when DISM/SFC don’t fix it)
- Mount the Server 2025 ISO on the server, run Setup from the mounted image and choose “Keep personal files and apps” (the standard in‑place repair/upgrade). That replaces system files and finishes servicing steps that otherwise choke during SAFE_OS, and is widely recommended when servicing/commit fails.
If the server is a Domain Controller or holds FSMO roles
- Consider building a new Server 2025 VM/host and migrating roles (preferred long‑term). In‑place upgrades of DCs are supported but riskier; best practice often recommends introducing a new OS server and transferring roles rather than upgrading an existing DC in place. Backup system state first if you proceed with in‑place.
If you must preserve settings/roles/users (your stated goal)
Option A — In‑place repair upgrade (keeps roles/settings) — try after the triage steps above (suspend BitLocker, remove third‑party blockers, run DISM with ISO as source). This is the fastest route to preserve everything.
Option B — Safer/cleaner (recommended for DCs / critical servers)
- Build a new Server 2025 system (VM or physical), join it to the domain, promote as DC (if AD), transfer FSMO and other roles, migrate file/print/DFS/WSUS settings, then demote the old server. This keeps uptime and is less fragile than an in‑place upgrade. Many admins prefer this route for servers with important roles.
If you want me to look at the specific cause
- Attach the log files I listed (setuperr.log, setupact.log, miglog.xml / Panther files). If you can paste the last ~40 lines from setuperr.log and the matching section from setupact.log where the 0x8007042B appears, I’ll parse it and tell you the exact component/service/driver that failed and the next precise action.
Final notes / cautions
- Always take a full system backup and a system state backup (for AD) before an in‑place upgrade.
- If you hit repeated SAFE_OS Migrate Data failures pointing at a particular driver/service, remove that software before retrying. If the logs point to “Access denied” or locked files, the offending service will need to be stopped/uninstalled.
classic57b
Member
- Joined
- Sep 11, 2025
- Messages
- 7
- Thread Author
-
- #7
Followed instructions and still no joy. Please see attached files as requested. Please help with the exact cause of the failure.
Attachments
classic57b
Member
- Joined
- Sep 11, 2025
- Messages
- 7
- Thread Author
-
- #9
Re-submitting requested files. Please tell me the exact component/service/driver that failed and the next precise action.
Attachments