Microsoft’s August cumulative for Windows 11 (KB5063878, OS Build 26100.4946) has been tied by multiple independent community tests and tech outlets to a serious storage regression: under sustained, large sequential writes some SSDs can stop responding, disappear from Windows, and — in a minority of reports — remain inaccessible after a reboot, potentially exposing written files to corruption or loss. (support.microsoft.com, notebookcheck.net, guru3d.com)

Background / Overview

Microsoft released KB5063878 as the August 12, 2025 cumulative security and quality update for Windows 11 24H2 (OS Build 26100.4946). The official KB page lists the package contents and standard installation guidance but, at the time community reporting surfaced, did not list a storage-device failure as a known issue on the Microsoft support entry. (support.microsoft.com)
Within days of the rollout a cluster of community reproductions and specialist write-ups surfaced a consistent failure fingerprint: when performing heavy, continuous file writes — typically in the tens of gigabytes range (community tests commonly cite ~50 GB and above) — some drives abruptly stop responding, disappear from Device Manager and Disk Management, and present unreadable SMART/controller telemetry. In many cases a reboot restores visibility temporarily, but the condition can recur under the same workload; in a smaller subset of reports the drive remained inaccessible or exhibited signs of corruption after restart. (notebookcheck.net, igorslab.de)
This article summarizes the evidence collected so far, explains why the problem is technically plausible, evaluates the scope and quality of the reporting, flags unverified claims, and offers practical mitigations and investigatory steps for both consumers and IT administrators.

What users are reporting — symptoms and reproducibility​

Core symptom profile​

  • The drive disappears from Windows (File Explorer, Device Manager, Disk Management) mid-write.
  • SMART and controller data become unreadable to host utilities.
  • Rebooting the system sometimes restores device visibility; the issue frequently reappears when the same heavy write workload is applied.
  • Files in-progress when the fault occurs may be incomplete or corrupted; in rare user reports the drive does not return to service without vendor intervention. (notebookcheck.net, guru3d.com)

Typical trigger workload​

Community tests converge on a reproducible pattern: sustained sequential writes — large game updates, mass file copies, archive extraction, or disk-cloning operations — are the common trigger. Testers often report a threshold in the region of ~50 GB and above, and note controller utilization spikes (e.g., ~60% or higher) during repro. That workload profile stresses the drive’s cache, metadata updates, and host/OS buffer paths simultaneously. (igorslab.de, windowsforum.com)
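For staging or lab validation, the reported trigger profile can be approximated with Microsoft's open-source diskspd I/O generator. The sketch below is illustrative only: the drive letter, file size, queue depth, and duration are placeholder assumptions, and because the whole point of the test is that it may provoke the fault, it should only ever be run against an expendable drive that holds no data you care about.

    # Minimal sketch of a sustained sequential-write test with diskspd (placeholder values).
    # Run ONLY against a disposable test drive (E: here) with nothing of value on it.
    diskspd.exe -c60G -b1M -t1 -o8 -w100 -Sh -d300 -L E:\stress-test.dat
    # -c60G : create a 60 GB test file (above the ~50 GB threshold cited in reports)
    # -b1M  : 1 MiB blocks, sequential access (diskspd defaults to sequential without -r)
    # -w100 : 100 percent writes
    # -Sh   : bypass software and hardware write caching so writes reach the controller
    # -d300 : sustain the workload for 300 seconds
    Remove-Item E:\stress-test.dat    # clean up the test file afterwards

Watching Device Manager and the System event log while the test runs is the simplest way to tell whether a staging machine reproduces the reported fingerprint.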

Which drives are implicated so far​

Early, community-compiled lists and hands-on reproductions repeatedly highlight drives using Phison controller families — particularly some DRAM-less configurations — but the phenomenon is not strictly limited to Phison-based products. Branded models called out across different reports include Corsair MP600 (and other E12/E16 variants), Kioxia Exceria Plus G4, SanDisk Extreme Pro M.2 NVMe 3D, and several third-party OEM drives. Additional follow-ups mention WD Blue SN5000, WD Red SA500, Crucial P3 Plus and others in mixed recovery profiles (some recover after reboot, others do not). These lists are community-sourced and should be treated as investigative leads, not definitive recall inventories. (guru3d.com, igorslab.de)

Why this is technically plausible​

The reported behavior matches known failure modes where host-side changes in buffering, NVMe driver timing, or HMB allocations expose latent firmware bugs in SSD controllers:
  • NVMe Host Memory Buffer (HMB) lets DRAM-less SSDs borrow host RAM for caching. That creates tight coupling between the OS, NVMe driver, and controller firmware. Changes in how Windows allocates or manages HMB, or subtle timing/regression in the kernel I/O path, can push certain firmware code paths into unexpected states. Historical 24H2-era incidents (HMB-related BSODs and instability) demonstrate how an OS change can reveal firmware edge cases.
  • Sustained sequential writes exercise the controller’s metadata and garbage-collection behavior. If the controller firmware encounters an unhandled condition (e.g., a command ordering or resource exhaustion scenario), it may “lock up” or become non-responsive. The OS then treats the device as removed from the PCIe topology, resulting in the drive vanishing from the system and SMART data becoming unreadable. (guru3d.com, windowsforum.com)
  • The pattern of temporary recovery after reboot but recurrent failure under the same workload strongly suggests a timing or resource edge case rather than wholesale physical flash die failure in most instances. That said, a firmware crash triggered repeatedly can corrupt controller metadata and may lead to persistent data loss in the worst case. (igorslab.de, notebookcheck.net)

How reliable is the evidence?​

Strengths​

  • Multiple independent community testers reproduced a consistent set of symptoms across different platforms and drives, which increases credibility relative to isolated anecdote. Outlets that aggregated those tests (enthusiast sites and hardware bloggers) produced reproducible trigger procedures and model lists used for follow-up testing. (notebookcheck.net, guru3d.com)
  • The technical profile of the failure (HMB/DRAM-less controller sensitivity, sustained sequential writes) aligns with established storage failure patterns previously documented in the Windows 11 24H2 lifecycle. That alignment makes the reported behavior technically plausible rather than a coincidental attribution.

Limitations and gaps​

  • There is no single authoritative vendor or Microsoft bulletin (at the time of the early reports) that universally confirms KB5063878 as the definitive root cause across all affected models. Microsoft’s official KB for KB5063878 initially listed no known storage-related issues. That absence matters: vendor telemetry and Microsoft-sourced diagnostics are required to establish causation and to coordinate fixes. (support.microsoft.com)
  • The community model lists are useful signals but suffer from sampling bias: they focus on enthusiasts who run heavy writes and do forensic testing. Many consumer systems with lighter workloads may never reproduce the fault, so incidence across the installed base remains uncertain.
  • Some headlines have used words like “destroy” or “permanently bricked” which overstate the established evidence. While some users report non-recoverable drives after the event, the majority of community reproductions show temporary disappearance and recurrence rather than guaranteed hardware death. Those worst-case accounts are serious and deserve attention, but they are less well-correlated and require vendor analysis to separate firmware-corruption-driven permanent loss from coincident hardware failures. Flagging those scenarios as high-risk but not universally confirmed is the responsible position. (ghacks.net, overclock3d.net)

Immediate practical guidance (consumer and IT admin checklists)​

The guidance below synthesizes community mitigations, vendor best practices, and Microsoft update mechanics.

For consumers and power users​

  • Pause heavy writes: If you installed KB5063878 (or KB5062660 preview builds) and your system uses an NVMe/SSD for important data, postpone large sequential transfers (game installs/updates, bulk media copies, disk cloning) until the situation is clarified. Community reproductions commonly cite ~50 GB of sustained writes as the trigger. (notebookcheck.net)
  • Back up now: If a drive shows instability, back up all accessible data immediately to a known-good device or cloud storage. Avoid repeating risky write workloads on the suspect drive.
  • Check firmware and vendor tools: Run your SSD vendor’s dashboard (e.g., Corsair SSD Toolbox, WD Dashboard, Crucial Storage Executive) to check firmware version and SMART status, and follow vendor instructions for firmware updates. Historically, firmware patches have resolved similar controller edge cases. Do not update firmware without a backup and follow vendor steps carefully.
  • Capture diagnostics if you can: If the failure reproduces, capture Event Viewer logs (Windows Logs → System) around the time of the incident and save vendor diagnostic logs; they help vendors and Microsoft diagnose root cause. Avoid repeated reboots that may further write to the device. (A collection sketch follows this list.)
  • Avoid uninstall shortcuts unless you understand the risks: Removing a combined LCU+SSU package is non-trivial. Microsoft’s KB notes that the servicing stack (SSU) inclusion can restrict removal methods and suggests using DISM /Remove-Package with the exact LCU package name for removal; follow official guidance carefully. (support.microsoft.com)
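For the diagnostics step above, a minimal PowerShell sketch along these lines captures recent storage-related System events plus a full log copy for vendor or Microsoft escalation. The provider names, event IDs, and output paths are typical assumptions rather than an official procedure; adjust them to what your system actually logs.

    # Collect recent storage-related System events (providers/IDs are typical, not exhaustive;
    # disk event 157 "surprise removed" and 153 "retried I/O" are the usual suspects).
    $since = (Get-Date).AddHours(-6)
    Get-WinEvent -FilterHashtable @{ LogName = 'System'; StartTime = $since } |
        Where-Object { $_.ProviderName -in 'disk', 'stornvme', 'Ntfs' } |
        Select-Object TimeCreated, Id, ProviderName, Message |
        Export-Csv "$env:USERPROFILE\Desktop\storage-events.csv" -NoTypeInformation

    # Keep an unfiltered copy of the System log as well.
    wevtutil epl System "$env:USERPROFILE\Desktop\System-snapshot.evtx"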

For IT administrators and enterprise​

  • Inventory endpoints for exposed devices (DRAM-less NVMe, Phison families, the community model lists).
  • Hold deployment of KB5063878 to sensitive fleets until vendor/Microsoft guidance is available or until you can test firmware-validated platforms.
  • Use WSUS/SCCM controls and deployment rings to stage updates; note that KB deployment via WSUS/SCCM reportedly encountered error 0x80240069 in some environments earlier in the rollout, and Microsoft applied mitigations for that distribution problem. Monitor Windows Update Agent logs for related errors. (windowslatest.com)
  • Temporary registry mitigation (emergency, not a fix): Community-sourced emergency mitigations used previously involve adjusting HMB allocation policies via registry keys (e.g., HmbAllocationPolicy under storport/stornvme parameter locations). This reduces or disables HMB behavior and may stop the symptom in DRAM-less drives, but it reduces performance and is not a long-term fix. Treat such edits as emergency stopgaps only and document changes for rollback. (windowsforum.com)
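For reference, the community-circulated HMB mitigation amounts to a single stornvme parameter. The key path, value name, and the meaning of the value 0 below are community-reported, not Microsoft-documented guidance; verify against your SSD vendor's advice, take a backup first, reboot for the change to apply, and record it so it can be reverted once a proper fix ships.

    # Emergency stopgap only -- community-reported key and value, verify before use.
    # A value of 0 reportedly disables Host Memory Buffer allocation for stornvme.
    $key = 'HKLM:\SYSTEM\CurrentControlSet\Services\stornvme\Parameters\Device'
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name 'HmbAllocationPolicy' -PropertyType DWord -Value 0 -Force | Out-Null
    # Reboot to apply. To roll back later:
    #   Remove-ItemProperty -Path $key -Name 'HmbAllocationPolicy'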

How to confirm whether the update is installed and (if necessary) remove it safely​

  • To check whether KB5063878 is installed: run winver.exe (should show build 26100.4946) or check Settings → Windows Update → Update history. Microsoft’s KB page documents the package and includes file lists. (support.microsoft.com)
  • To remove the LCU: Microsoft’s KB entry explains that because the combined package includes the servicing stack update (SSU), normal wusa.exe uninstall options might not work — instead, the LCU package name must be identified and the DISM /Remove-Package command used. Follow Microsoft’s documented steps and ensure you have backups before attempting removal. Uninstalling OS servicing packages on production machines can have side effects; coordinate with support or a sysadmin. (support.microsoft.com)
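A minimal sketch of both steps from an elevated PowerShell prompt follows; the package identity shown in the comment is illustrative only, so substitute the exact name DISM reports on your machine.

    # 1. Confirm whether the update is present.
    Get-HotFix -Id KB5063878          # should list the update if installed; errors otherwise
    winver.exe                        # build 26100.4946 corresponds to KB5063878

    # 2. Identify the exact LCU package name, then remove it with DISM.
    dism /online /get-packages | findstr /i "RollupFix"
    # Example shape only -- use the identity reported above:
    #   dism /online /remove-package /packagename:Package_for_RollupFix~31bf3856ad364e35~amd64~~26100.4946.x.y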

Vendor and Microsoft response — where things stand (early picture)​

  • Microsoft’s KB page for KB5063878 lists the update details, but at the initial community reporting juncture did not identify the storage disappearance scenario as a known issue; Microsoft independently addressed a separate WSUS/SCCM deployment error (0x80240069) via servicing controls. That divergence (quick acknowledgement for enterprise install errors versus no initial storage known-issue entry) left community researchers to carry out hands-on reproducibility tests and to contact vendors. (support.microsoft.com, windowslatest.com)
  • SSD vendors at the time of early reporting had not issued a uniform cross-vendor bulletin explicitly linking KB5063878 to specific firmware faults in all listed models. Some vendors historically release targeted firmware updates when telemetry reveals controller faults; others require a clear, reproducible trigger pattern and vendor-side logs to issue a patch. The absence of a consolidated cross-vendor advisory means follow-up confirmatory work and coordinated firmware/driver fixes may take time. Community testing and vendor telemetry will determine whether fixes arrive via vendor firmware or a Microsoft servicing update. (igorslab.de)

Technical risk assessment — who should be most concerned​

  • High concern: Users and professionals who routinely perform large sequential writes to NVMe/SSDs (gamers updating large games, video editors exporting large projects, backup/cloning operations). Their workflows match the reported trigger patterns and thus have elevated exposure. (notebookcheck.net)
  • Moderate concern: Systems using DRAM-less NVMe SSDs, particularly those with Phison-family controllers, which were overrepresented in early reproductions. These drives are more reliant on HMB and therefore more sensitive to host-side memory allocation behavior. (igorslab.de)
  • Lower concern: Casual users who rarely execute sustained tens-of-gigabytes sequential writes; many such users may never observe the fault. Still, the presence of recovery-edge cases means prudent backup hygiene is always warranted.

Recommended step-by-step checklist (concise)​

  • Verify: Run winver.exe or Settings → About to confirm your build (26100.4946 = KB5063878). (support.microsoft.com)
  • Backup: Copy critical data to another drive or cloud immediately if the suspect SSD is accessible.
  • Avoid heavy writes: Don’t install large games, compress/extract big archives, or run bulk backups to the drive until you have guidance. (notebookcheck.net)
  • Check firmware: Run vendor dashboard/tools and update firmware if a vendor advisory exists. Back up before flashing. (See the inventory sketch after this checklist.)
  • Capture logs: If you see the fault, save Event Viewer logs and SMART output (smartctl/CrystalDiskInfo) and stop further writes.
  • If managing fleets: Pause broad KB deployment in pilot rings; require firmware confirmation before upgrading exposed endpoints; use WSUS/SCCM controls to stage rollout. (windowslatest.com)
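For the firmware and fleet checks above, the built-in Storage cmdlets give a quick model-and-firmware inventory to compare against community lists and vendor advisories; note that FirmwareVersion can come back blank behind some RAID or Intel VMD configurations.

    # Inventory drive models and firmware revisions for triage.
    Get-PhysicalDisk |
        Select-Object FriendlyName, SerialNumber, MediaType, BusType, FirmwareVersion, HealthStatus |
        Sort-Object FriendlyName |
        Format-Table -AutoSize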

Long view and closing analysis​

Windows and storage ecosystems are tightly coupled: OS buffering policies, NVMe driver behavior, and SSD controller firmware must work in concert. When a widely distributed OS update subtly changes timing or resource allocation, latent controller bugs can surface — sometimes only under demanding workloads. The KB5063878 reports fit that pattern: reproducible symptoms under sustained writes, an over-representation of certain controller families in community tests, and a mix of recoverable and unrecoverable outcomes. (igorslab.de)
The good news is that historically these incidents are resolvable through coordinated vendor firmware updates or targeted OS mitigations. The cautionary reality is that such fixes require precise telemetry, vendor analysis, and careful packaging; until those appear, the prudent path for users and admins is conservative: back up, avoid risky writes, and stage updates.
Finally, early community testing is valuable and has historically accelerated vendor responses — but community lists and thresholds (the 50 GB number, the 60% controller load, specific model lists) are indicators, not immutable technical absolutes. Treat the numbers as practical heuristics for risk management, and watch for formal vendor or Microsoft advisories before drawing final conclusions about permanent hardware damage. (windowsforum.com)

Conclusion
The initial reports tying KB5063878 to SSD disappearances during heavy writes are credible, technically plausible, and supported by multiple independent reproductions — but they are not yet a finalized vendor-validated root cause across the entire market. Until vendors and Microsoft publish coordinated diagnostics and fixes, users who run sustained large writes or rely on DRAM‑less/Phison-based SSDs should act cautiously: back up, avoid large sequential transfers on recently patched systems, and prioritize firmware checks and staged update policies for critical endpoints. (support.microsoft.com, notebookcheck.net, igorslab.de)

Source: Tom's Hardware Latest Windows 11 security patch might be breaking SSDs under heavy workloads — users report disappearing drives following file transfers, including some that cannot be recovered after a reboot
 

Microsoft’s August cumulative for Windows 11 24H2 (KB5063878, OS Build 26100.4946) has been tied by community testers and a handful of technology outlets to a reproducible storage regression: under sustained, large sequential write workloads some NVMe SSDs can stop responding, disappear from the operating system, and — in a minority of reports — return unreadable SMART/controller telemetry or show file corruption after reboot. This article summarizes the BornCity coverage of the incident, cross‑checks the core claims against vendor and independent reporting, explains the most plausible technical mechanisms, and lays out practical mitigations for consumers and IT administrators who must manage risk while Microsoft and drive vendors investigate. (support.microsoft.com)

Background / Overview

Microsoft released KB5063878 as the August 12, 2025 cumulative update for Windows 11 version 24H2 (OS Build 26100.4946). The official Microsoft support article lists the package contents and standard installation guidance and, at the time of the initial reports, stated that Microsoft was not currently aware of any issues with the update. That same KB page also documents how the combined Servicing Stack Update (SSU) and Latest Cumulative Update (LCU) are delivered and explains the supported removal method for the LCU (DISM Remove‑Package), not via simple wusa uninstall. (support.microsoft.com)
Within days of rollout two distinct reliability problems surfaced in parallel: an enterprise deployment regression that produced the WSUS/SCCM install error 0x80240069 and a separate cluster of community‑reproduced storage failures affecting a subset of SSDs during heavy write workloads. Microsoft acknowledged and mitigated the enterprise install issue via known‑issue rollback/servicing controls; the storage symptom set was reported widely by community testers and specialist outlets but — at publication time — lacked a formal global “known issue” entry in the Microsoft KB page. (windowslatest.com, neowin.net)

What BornCity reported and why it matters​

BornCity’s itemized report summarized early community reproductions linking KB5063878 to drives “disappearing” during sustained large writes, and it flagged the concurrent WSUS/SCCM install failures as a separate but consequential reliability regression. The piece emphasized typical trigger profiles (large sequential writes in the ~50 GB range), the reproducibility of the symptom in multiple independent lab reproductions, and the preliminary hypothesis that the fault reflected a host-side change exposing latent SSD firmware/controller bugs.
BornCity’s coverage is valuable because it collates firsthand community data and points administrators to both the enterprise install mitigation and the storage‑risk warnings that are most relevant to day‑to‑day maintenance: the storage failure is not a sporadic one‑off; it consistently appears under an identifiable workload and thus represents a real data‑integrity vector for heavy‑I/O operations like bulk game updates, disk cloning, or mass media file transfers. That characterization is consistent with independent hands‑on accounts from enthusiast publications and forums.

Independent corroboration: what other outlets found​

Multiple independent outlets and storage/enthusiast communities have published similar reproductions and warnings. Key, independently reported points include:
  • Reproducible symptom fingerprint: Drives can become unresponsive mid‑write and vanish from Device Manager and Disk Management; SMART/controller telemetry can be unreadable while the drive is offline; files written during the failure window are at risk of corruption. These accounts appear in community threads and specialist coverage. (guru3d.com, tomshardware.com)
  • Trigger workload: Testers repeatedly reproduce the issue during sustained sequential writes typically in the tens of gigabytes (community tests often cite ~50 GB as a practical trigger threshold). The workload profile stresses caching, metadata updates, and host/OS buffering simultaneously. (guru3d.com)
  • Candidate controllers and models: Early repros over‑represent certain controller families — especially Phison‑based controllers and some DRAM‑less or older designs — though the phenomenon is not strictly limited to one vendor or controller. Published lists vary by reporter and are evolving. (guru3d.com)
  • Enterprise deployment issue: Separately, administrators saw WSUS/SCCM installations fail with error 0x80240069; Microsoft acknowledged that install path was affected and applied a Known Issue Rollback (KIR) while preparing a servicing fix. This is a confirmed fact and has been communicated publicly. (windowslatest.com, neowin.net)
Taken together, these independent reports corroborate BornCity’s core observations: there is a reproduced pattern of severe storage regression tied temporally to the August cumulative update, and a separate enterprise install delivery bug that Microsoft has addressed. (support.microsoft.com)

Technical analysis — what’s most likely happening​

The host‑firmware interaction model​

Modern NVMe SSDs are complex embedded systems: the controller firmware, on‑drive DRAM (when present), NAND management logic, and host interactions (driver, OS buffer/cache, HMB) form a single, stateful system. A seemingly minor change in how Windows schedules or allocates host resources during sustained writes can expose latent firmware edge cases or timing assumptions inside a controller.
Two technical mechanisms are particularly plausible in this incident:
  • Host Memory Buffer (HMB) and allocation policy interaction. DRAM‑less NVMe SSDs use HMB to borrow host RAM for mapping and caching. Previous Windows 11 24H2 interactions with HMB allocations caused BSODs for certain Western Digital/SanDisk drives until firmware updates and mitigation blocks were applied. A change in Windows’ buffer or HMB behavior can stress a firmware path that previously went untested at scale. (windowslatest.com, tomshardware.com)
  • IO path timing and controller metadata edge cases. Sustained sequential writes drive aggressive metadata updates and stress the limited caching resources of DRAM‑less drives. If Windows’ storage stack or an updated driver path alters pacing, queue depths, or DMA handoffs, an SSD controller with a fragile state machine can lock or crash, presenting as an offline drive with unreadable SMART. Several community analyses point to controller lock‑ups rather than an ordinary kernel crash. (guru3d.com)

Why some drives and not others​

Not all SSDs are equally susceptible. Differences that amplify risk include:
  • Use of DRAM vs. DRAM‑less designs (DRAM‑less drives depend on HMB and are more sensitive to host behavior).
  • Controller family and firmware maturity (some Phison firmware versions have recurred in repro lists).
  • Drive capacity and internal mapping strategies (large writes can stress different internal channels).
  • System BIOS/platform drivers (Intel VMD vs. storNVMe host driver can change behavior).
This heterogeneity explains why the fault reproduces on some systems and not on others: the incident likely requires a narrow combination of workload, controller state, firmware, and host driver timing. (guru3d.com, neowin.net)

Assessing the claims: what is verified, what remains unproven​

Verified facts
  • Microsoft published KB5063878 on August 12, 2025 for Windows 11 24H2 (OS Build 26100.4946). The KB page lists package details and, at initial publication, reported no known issues. (support.microsoft.com)
  • Microsoft acknowledged and mitigated an enterprise install failure affecting WSUS/SCCM clients (0x80240069) and issued servicing controls to address the delivery regression. (windowslatest.com, neowin.net)
  • Community testers and several specialist outlets (storage‑focused sites and enthusiast forums) reproduced a pattern where some SSDs become inaccessible during sustained, large writes after the update. Those reproductions show consistent trigger profiles and symptom fingerprints. (guru3d.com, tomshardware.com)
Unproven or still‑open points
  • A vendor‑level, end‑to‑end causal attribution directly tying the storage failures to a specific change inside KB5063878 has not been published by Microsoft (no global “known issue” entry acknowledging a storage regression was in the KB at the time of the early community reports). Microsoft and SSD manufacturers must still correlate telemetry to prove causality beyond reasonable community suspicion. Treat the storage claim as a high‑confidence early warning rather than a fully vendor‑validated root‑cause statement. (support.microsoft.com)
  • The overall incidence rate (how many drives among the installed base are affected) is still uncertain; available reproducibility shows clear signal in specific controller/firmware combinations but not across all SSDs. Broad prevalence data from vendor telemetry is necessary to size the real risk. (guru3d.com)

Practical risk management — immediate steps for consumers​

If your device is already patched with KB5063878, triage risk based on usage patterns and hardware:
  • High‑risk scenarios (take immediate action)
  • Workstations performing sustained large writes (disk cloning, large game updates, bulk backups).
  • Systems with DRAM‑less NVMe drives or known at‑risk models (community lists often flag Phison‑based models).
  • Machines with business‑critical data where even brief offline behavior could corrupt backups or replicas.
  • Recommended immediate actions
  • Create or verify an up‑to‑date backup: image the system, and copy any mission‑critical files to an external, unaffected medium or cloud storage.
  • Avoid large sequential write operations while investigations continue; postpone bulk file transfers, large installers, and disk clones when possible. (guru3d.com, tomshardware.com)
  • Check SSD firmware: visit the drive vendor’s support pages and apply any firmware updates that explicitly address compatibility with Windows 11 24H2 or that list stability improvements. Firmware fixes have resolved similar HMB issues in the past and are a low‑regret action. (tomshardware.com)
  • Monitor Microsoft’s Release Health and the SSD vendor advisories for definite fixes and documented mitigation steps. (support.microsoft.com)
  • Emergency recovery steps if a drive disappears mid‑write
  • Do not immediately reformat. Record Event Viewer logs (System) and capture timestamps for failed writes.
  • If the drive reappears after reboot, run vendor diagnostic tools and SMART reads, and perform a surface/consistency check (chkdsk or vendor utilities) before trusting files written during the incident; a read-only check sketch follows this list.
  • If a drive is still unreadable after reboot, consult vendor recovery guidance — some firmware issues require vendor tools or reflashing, and some user reports show partial recovery after vendor intervention. Be aware: firmware reflashes can risk data loss; back up accessible data first. (guru3d.com)
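A read-only consistency pass, sketched below with a placeholder drive letter, surfaces file-system damage without writing to the suspect volume; add repair switches only after the accessible data has been backed up.

    # Non-destructive checks once the drive is visible again (D: is a placeholder).
    chkdsk D:                            # read-only pass: without /f or /r nothing is repaired
    Repair-Volume -DriveLetter D -Scan   # online NTFS scan that reports problems without fixing them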

Practical risk management — recommendations for IT administrators​

Enterprises must balance security patching against operational risk. The WSUS/SCCM install regression is already a confirmed problem Microsoft mitigated via KIR; storage risk is an evolving investigation that requires pragmatic controls.
  • Short‑term controls
  • Pause or defer KB5063878 deployment in high‑risk rings (imaging, build servers, bulk storage hosts, and application servers that perform large data writes) until vendors and Microsoft publish definitive guidance. Use existing ring‑based deployment and pilot testing to limit exposure. (windowslatest.com)
  • Apply Microsoft’s recommended KIR or follow documented workarounds for WSUS/SCCM if you experience 0x80240069. Consult the Microsoft release‑health and KB documentation for the specific remediation steps. (windowslatest.com, neowin.net)
  • Block heavy write workflows (cloning, mass deploys) on newly patched test hosts to confirm stability before wider rollout. Maintain pre‑patch images so you can quickly roll affected endpoints back for forensic capture if needed.
  • Investigative and forensic checklist (recommended)
  • Collect Event Viewer System logs at the time of failure and run pnputil /enum-drivers and Get‑HotFix to capture installed packages and drivers (a collection script sketch follows this list).
  • Record exact SSD model, firmware version, controller family, and host driver (stornvme vs. vendor driver vs. Intel VMD).
  • Run vendor diagnostic tools to capture SMART and controller telemetry before and after reproductions.
  • If data corruption is suspected, isolate the device and consult vendor RMA or recovery procedures — do not reinitialize in ways that prevent vendor analysis.
  • Longer‑term posture
  • Expand driver and firmware inventory scanning in endpoint management workflows to ensure devices are on vendor‑recommended firmware baselines before mass patching.
  • Add HMB and controller family awareness into asset classification for storage‑sensitive workloads; build firmware remediation playbooks for DRAM‑less NVMe fleets.
  • Maintain a documented rollback strategy (images, backups, and automated restore playbooks) in case a patch must be removed or a compatibility hold applied.
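A small collection script in the spirit of the checklist above can bundle the items vendors usually request; the folder path is a placeholder, and the commands should be run from an elevated PowerShell prompt.

    # Gather the forensic basics into one folder for a vendor/Microsoft support case.
    $case = 'C:\Temp\kb5063878-case'
    New-Item -ItemType Directory -Path $case -Force | Out-Null

    wevtutil epl System "$case\System.evtx"        # raw System event log
    pnputil /enum-drivers > "$case\drivers.txt"    # third-party driver packages
    Get-HotFix | Out-File "$case\hotfixes.txt"     # installed updates, including the LCU
    Get-PhysicalDisk |
        Select-Object FriendlyName, SerialNumber, BusType, FirmwareVersion |
        Out-File "$case\disks.txt"                 # drive model and firmware details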

What to watch next​

  • Microsoft telemetry. A vendor confirmation tying the storage regression to a specific LCU code path or storage‑stack change would convert the community warning into a formal known issue and trigger a prioritized fix or KIR. Monitor Microsoft Release Health and the KB article for an explicit storage‑related known‑issue entry. (support.microsoft.com)
  • SSD vendor advisories and firmware updates. Vendors may publish lists of impacted models, firmware versions that address the failure, and recovery tools; applying approved firmware is the primary vendor remedy in past HMB cases. (tomshardware.com)
  • Independent test confirmations. Storage‑specialist sites and reproducible test scripts from community labs will either widen or narrow the list of implicated controllers; these independent confirmations are the best early signal for whether this is a narrow compatibility fault or a broader architectural change. (guru3d.com, tomshardware.com)

Final assessment — strengths, risks, and recommended posture​

  • Notable strengths of the current reporting
  • Rapid community triage and reproducibility delivered usable trigger profiles (sustained large sequential writes, ~50 GB threshold) and candidate model/controller lists that let users and admins perform targeted risk assessments. BornCity and multiple independent outlets aggregated a consistent set of symptoms that allowed quick, pragmatic mitigations to be recommended.
  • Microsoft’s servicing controls (KIR) for the WSUS/SCCM install regression show the company can quickly mitigate deployment‑path regressions while developing permanent fixes. (windowslatest.com)
  • Principal risks and limitations
  • Vendor confirmation of root cause is still pending. Without explicit telemetry from Microsoft and SSD manufacturers, the storage failure remains a high‑confidence community finding rather than a fully validated vendor statement. This leaves some ambiguity in recommended permanent remediations. (support.microsoft.com)
  • Data‑integrity risk is real for affected combinations. Even if the fault is not universal, a sufficiently narrow combination (driver + firmware + workload) can create unrecoverable damage for specific workloads. Administrators and power users must treat large‑I/O operations as risky on newly patched clients until vendor guidance is available. (tomshardware.com, guru3d.com)
  • Recommended posture (concise)
  • Backup first. Pause heavy writes on patched systems. Apply vendor firmware updates. Defer mass deployment in enterprise rings until Microsoft and SSD vendors publish a coordinated remediation. Use Known Issue Rollback or Microsoft’s recommended WSUS/SCCM mitigations for deployment failures. (support.microsoft.com, windowslatest.com, tomshardware.com)

Conclusion​

The BornCity report on KB5063878 correctly captures a fast‑moving, technically plausible storage risk: a reproducible pattern in the community that links Microsoft’s August cumulative to SSD disappearances during sustained writes. Independent outlets and forum reproductions corroborate the symptom set and reproduce it under similar workloads, while Microsoft has already acknowledged and mitigated a separate enterprise deployment regression tied to the same update. What remains incomplete is vendor‑level, telemetry‑backed attribution that definitively ties the LCU to the controller‑level failures and quantifies the incident’s prevalence across installed hardware.
Until Microsoft and SSD manufacturers issue a unified, documented fix or firmware update, the pragmatic course for users and administrators is clear: prioritize backups, avoid heavy sequential writes on freshly patched endpoints, apply vendor firmware updates, and use phased pilot rings for any further deployments. Treat the community findings as a high‑priority early warning — actionable and corroborated, but still pending full vendor validation. (support.microsoft.com, guru3d.com, tomshardware.com)

Source: BornCity Windows 11 24H2: Does the Aug. 2025 update KB5063878 cause SSD errors? | Born's Tech and Windows World
 

A surge of community reports and hands‑on tests suggests that Microsoft’s August cumulative for Windows 11 24H2 — published as KB5063878 (OS Build 26100.4946) — can trigger a reproducible storage regression that makes some NVMe SSDs (and a few HDDs in isolated reports) “disappear” under sustained heavy writes, sometimes leaving files corrupted or controller SMART data unreadable; vendors and testers are racing to confirm root cause and deliver fixes while warning that the evidence remains community‑driven and incomplete. (tomshardware.com) (igorslab.de)

Background / Overview

Microsoft shipped KB5063878 on August 12, 2025 as the monthly cumulative (SSU + LCU) for Windows 11 version 24H2. The public KB entry lists security and quality fixes but, at the time community testing surfaced, did not list a storage‑device failure as a known issue. (tomshardware.com)
Within days, independent testers and enthusiast outlets independently reproduced the same failure fingerprint: during sustained, sequential writes — commonly reported near or above ~50 GB — certain SSDs stop responding, vanish from Device Manager and Disk Management, and present unreadable SMART/controller telemetry; in a subset of cases the affected volume returns corrupted or the drive remains inaccessible after reboot. Multiple technical outlets and community threads compiled model lists and reproduction steps that consistently triggered failures in lab setups and real‑world scenarios. (guru3d.com, igorslab.de)
Phison, one of the world’s leading SSD controller designers, publicly acknowledged the situation and said it is “working with partners” to review controllers that “may have been affected,” without committing to a final root‑cause. That vendor statement increased credibility for the community reproductions but did not itself identify a definitive fault location (host vs. controller vs. combined). (tomshardware.com)

What users and testers are actually reporting​

Symptom cluster (consistent pattern)​

  • Large, sustained sequential writes (game installs, bulk archive extracts, backups) proceed normally and then abruptly fail near the ~50 GB mark. (tomshardware.com)
  • During the failure, the target drive vanishes from File Explorer, Device Manager and Disk Management. Vendor utilities cannot read SMART or controller registers. (guru3d.com)
  • Reboot sometimes restores temporary visibility. In other cases a partition is missing or files written during the failure are corrupted or lost. (notebookcheck.net, ghacks.net)
  • Community reproductions show the failure is not universal; many popular drives show no problem under the same workload. (pcgamesn.com)

Typical workload trigger​

  • Independent reproductions converge on sustained sequential writes in the tens of gigabytes — many reports cite roughly 50 GB as the point where controller/utilization spikes and the fault appears. (notebookcheck.net, guru3d.com)

Drives and controllers mentioned (community lists)​

Community test lists vary, but early compilations include multiple models and controller families. These lists are community‑sourced and evolving; treat them as investigative leads rather than vendor‑confirmed recalls.
  • Frequently mentioned as implicated: Corsair MP600, Corsair MP510, certain Phison PS5012‑E12-based SKUs, Kioxia Exceria Plus G4, SanDisk Extreme Pro M.2, Crucial P3 Plus, and several WD Blue/SA series units in some reproductions. (ghacks.net, notebookcheck.net)
  • Frequently reported as not affected in the same reproductions: Samsung 990 PRO, Samsung 980 PRO, Solidigm P44 Pro, and WD Black SN7100 in certain test rigs. (ghacks.net, guru3d.com)
Caveat: the exact hit list differs between testers because firmware revisions, motherboard firmware (BIOS/UEFI), storage drivers, and platform configurations change the exposure. These lists should not be taken as definitive vendor statements. (igorslab.de)

Why this looks like a host–firmware interaction, not a simple hardware failure​

The reproduced failure fingerprint — drive disappears mid‑write, SMART becomes unreadable, reboot sometimes restores visibility but not data integrity — points at a controller hang or lockup at the NVMe level. That behaviour commonly results from a timing or allocation mismatch between host‑side drivers (OS NVMe stack, StorPort, HMB allocation) and the SSD controller firmware.
  • Many affected drives are DRAM‑less or rely heavily on Host Memory Buffer (HMB) to borrow host RAM for mapping structures. If the host changes HMB allocation size, timing, or memory behavior, it can expose latent firmware bugs on the controller that only show under prolonged stress. This exact class of failure has precedent: earlier Windows 11 24H2 rollouts altered HMB behavior and triggered BSOD loops on certain WD/SanDisk models until firmware fixes were pushed. (igorslab.de)
  • Community kernel‑level analyses suggest the update may alter how Windows assigns or manages caching/buffer regions during high‑I/O workloads, producing an I/O profile that can push some controllers into an unrecoverable state. That is consistent with the mid‑write disappearance and unreadable SMART symptoms. However, a precise, telemetry‑backed root cause has not been publicly released by Microsoft or a vendor at the time of reporting. Phison’s statement confirms investigation but not attribution. (tomshardware.com, guru3d.com)
  • Importantly, hardware does fail in the wild: flash wears out; controllers produce corner‑case lockups; and HDDs can die independently. The challenge here is distinguishing natural hardware attrition from a reproducible regression triggered by a widely distributed update. Multiple independent reproductions that trigger the same behaviour on different drives and systems make the update‑related hypothesis plausible — but not conclusively proven without vendor/Microsoft telemetry.

What vendors and Microsoft have said (and not said)​

  • Phison: publicly acknowledged being “made aware” of the industry‑wide effects of KB5063878 and KB5062660 and said affected controllers are under review; Phison pledged to “provide updates and advisories” to partners. The statement confirms vendor awareness but stops short of assigning blame. (tomshardware.com)
  • Microsoft: the official KB article for KB5063878 originally listed standard security and quality fixes and did not immediately include a public “known issue” matching the storage symptoms when community reports emerged. Historically, Microsoft addresses critical regressions either via Known Issue Rollbacks (KIRs) or targeted mitigations while coordinating vendor firmware releases; whether and when a KIR or guidance will appear for this cluster is conditional on telemetry and vendor fixes. (tomshardware.com)
  • Other SSD vendors / outlets: independent technical outlets (Igor’s Lab, Guru3D, Notebookcheck, gHacks) performed collations and published investigative lists and reproduction steps. Several vendors appeared to be monitoring the situation; some advised users to check for firmware updates and to avoid heavy writes until guidance arrives. (windowsforum.com, guru3d.com)
Cautionary note: vendor statements so far aim to placate and investigate. They are not final confirmations that the Windows update caused permanent hardware failure across a broad installed base. That final attribution requires coordinated log/telemetry analysis and controlled lab confirmation.

Strengths and limits of the evidence so far​

Strengths​

  • Multiple independent reproductions by different testers and outlets converge on the same workload trigger (sustained tens‑of‑GB sequential writes). (notebookcheck.net, guru3d.com)
  • A major controller vendor (Phison) publicly acknowledged the issue and said affected controllers are under review, which reinforces that this is not mere rumor. (tomshardware.com)
  • Vendor‑style mitigations have precedence: Microsoft and SSD vendors have resolved related HMB/firmware regressions in past Windows updates through firmware releases and rollout controls, creating a pathway to remediation.

Limitations / open questions​

  • The bulk of public technical evidence originates from enthusiast labs and community testers; enterprise telemetry or Microsoft’s internal diagnostics have not been published. That leaves open the question of population‑scale impact.
  • Model lists differ between testers because firmware revisions, OEM firmware, motherboard BIOS versions, drivers, and even localized update bundles change exposure. No single definitive “affected‑models” bulletin exists from a primary vendor at publication time. (igorslab.de)
  • Some reported recoveries after reboot raise questions about whether permanent hardware damage is common or rare. File corruption risk during the affected window is real; permanent controller failure appears to be rarer but has been reported in isolated cases. (tomshardware.com)
Because of these limits, cautious administrators and enthusiasts should assume a plausible risk but not rush to declarative judgments about the entire SSD market.

Practical guidance for Windows users and administrators (conservative, prioritized)​

These steps prioritize data protection and easy operational controls. They echo community best practice and vendor guidance observed in previous update‑driven regressions.
  • Back up critical data immediately to an independent device or cloud. Backups are the only reliable protection against mid‑write corruption.
  • If KB5063878 is not yet installed and your workflow includes heavy writes (game installs, large media project transfers, cloning, local backups), delay the update and stage it in a test ring that mirrors your storage hardware; a minimal deferral sketch follows this list. Enterprises should hold the update for representative validation. (guru3d.com)
  • Avoid sustained sequential writes (> ~50 GB) on drives you suspect may be vulnerable until you confirm firmware/driver status and vendor guidance. (notebookcheck.net)
  • Check vendor update utilities (Corsair iCUE, SanDisk Dashboard, Kioxia tool, WD Dashboard, etc.) and apply firmware only if it is vendor‑recommended and you have current backups. Do not flash firmware blindly. (guru3d.com)
  • For managed environments: use WSUS/SCCM or MDM controls to stage or block KB5063878 until validated. Microsoft historically issues Known Issue Rollbacks for urgent deployment issues and may do so here if telemetry warrants.
  • If a drive becomes inaccessible: stop writing to the drive, capture Event Viewer and NVMe logs, image the drive for recovery (professional services recommended for critical data), and contact vendor support for diagnostics/RMA. Avoid repair or reformat attempts until you have an image if data is critical.
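For unmanaged or lightly managed machines, quality-update deferral can also be set through the Windows Update for Business policy values sketched below; the value names follow the commonly documented "Select when Quality Updates are received" policy, but verify them against current Microsoft documentation and prefer Group Policy or Intune where available.

    # Sketch: defer quality updates (e.g., 14 days) via WUfB policy registry values.
    $wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
    New-Item -Path $wu -Force | Out-Null
    New-ItemProperty -Path $wu -Name 'DeferQualityUpdates' -PropertyType DWord -Value 1 -Force | Out-Null
    New-ItemProperty -Path $wu -Name 'DeferQualityUpdatesPeriodInDays' -PropertyType DWord -Value 14 -Force | Out-Null
    # Remove both values (or set the period to 0) to resume normal quality-update timing.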

How journalists and admins should treat model lists and anecdote pools​

  • Treat community “hit lists” as investigatory leads, not vendor confirmations. They are useful for triage (e.g., testing fleets with the exact models and firmware listed) but are not definitive. (igorslab.de)
  • Cross‑validate: if you see a model on multiple independent lists (Igor’s Lab, Notebookcheck, gHacks, citizen testers), prioritize testing that model in your staging environment. If only a single tester reports a model, consider it lower confidence. (igorslab.de, notebookcheck.net)
  • Check firmware dates and motherboard BIOS versions. Affected behaviour often depends on firmware interplay; a newer controller firmware may already include a fix that prevents the condition from occurring.

Technical analysis: where the plausible fault lines run​

Three technical hypotheses currently explain the phenomenon, and they are not mutually exclusive:
  • 1) Host‑side buffering/HMB allocation regression: Windows changes the size or lifetime of Host Memory Buffer allocations or kernel buffering behavior under certain update code paths, and some controller firmwares mishandle the changed memory mapping leading to controller lockup. This explains the historical HMB‑related failures and some reproduce patterns.
  • 2) Driver timing / command queue regression: The update subtly changes driver timing, NVMe command sequencing, or completion handling, and some controllers encounter a race or resource exhaustion that causes an unrecoverable hang. The mid‑write disappearance and unreadable SMART point at a controller that has stopped responding to the host. (guru3d.com)
  • 3) A firmware bug that is only visible under an I/O profile produced by the update: in this case the update is a trigger rather than a cause; vulnerable firmware always had the bug but typical OS behaviour rarely exposed it until this specific workload and timing appeared. This model fits scenarios where only certain firmware versions are affected and vendor firmware fixes resolve the issue. (tomshardware.com)
Determining which of these (or what combination) is responsible requires vendor telemetry and controlled lab traces (Bus/PCIe captures, NVMe command logs, kernel traces). The community reproductions provide credible trigger profiles, which accelerate vendor root‑cause work but fall short of the final forensic data that vendors and Microsoft can collect.

Risk assessment — short and medium term​

  • Short term (days): localized risk for users performing heavy writes on affected controller/firmware combinations. Reboot often restores visibility but not necessarily data, so data loss is plausible during the affected window. (notebookcheck.net)
  • Medium term (weeks): vendor firmware fixes and Microsoft mitigations are the most likely path. Historically, controller vendors issue firmware and Microsoft can apply rollout controls or KIRs; combined action usually brings risk back to baseline levels. Phison’s public engagement suggests this collaborative remediation path is underway. (tomshardware.com, guru3d.com)
  • Systemic risk (long term): low for the global installed base unless telemetry shows a mass outage. Current evidence shows a reproducible but not universal regression. The principal risk remains data integrity for affected writes, which makes prudent backup and staging policy the right response.

What to watch next (signals that change the story)​

  • Formal Microsoft Release Health / KB update that lists a storage regression or a Known Issue Rollback for KB5063878.
  • Phison or other controller vendors publishing a detailed advisory and firmware index of affected controller models and fixed revisions. Phison’s general statement shows they are investigating; a model/firmware‑specific advisory would be the next step. (tomshardware.com)
  • Widespread, independent telemetry demonstrating mass failures beyond enthusiast labs. That would indicate a major distribution problem and change the operational posture from “caution” to “emergency recall/rollback.”

Final analysis and editorial stance​

The emerging evidence paints a plausible, reproducible storage regression linked to a Windows 11 cumulative update, targeting a narrow workload profile (sustained tens‑of‑GB sequential writes) and disproportionately affecting certain controller/firmware families in community tests. The vendor acknowledgment (Phison) raises the confidence that the phenomenon is real and impactful for some users. (tomshardware.com)
At the same time, the data is not yet conclusive enough to brand the KB as a global “drive‑killer.” Drive failures happen frequently in normal operation; correlation without context risks misattribution. The right reading of the evidence is that a specific update introduced or exposed an I/O profile that can trigger controller firmware corner cases on some hardware — a serious, actionable issue — but not a systemic, irreparable failure across all SSDs.
Until vendors and Microsoft publish coordinated telemetry and firmware lists, the sensible posture for users and admins is: assume a non‑zero risk, protect data, and delay non‑critical deployments of KB5063878 where heavy write workloads exist. That conservatism minimizes the real harm — file corruption and unrecoverable data loss — while allowing vendor‑led fixes to be deployed in a controlled way. (guru3d.com, igorslab.de)

Quick checklist (what every Windows user should do right now)​

  • Back up anything you can’t afford to lose.
  • If you perform large transfers, delay installing KB5063878 until vendor guidance is available.
  • If already updated and you must do big writes, monitor your drives (see the health snapshot sketch below) and avoid prolonged sequential transfers.
  • Check for vendor firmware updates and apply them only after backing up.
  • For enterprises, stage the update and consider temporary compatibility holds for vulnerable models. (guru3d.com)
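For the monitoring step above, the built-in reliability counters provide a quick, read-only health snapshot; not every drive populates every counter, so blank fields are normal, and vendor tools or smartctl remain the more detailed option.

    # Read-only health snapshot while unavoidable large writes are in progress.
    Get-PhysicalDisk |
        Get-StorageReliabilityCounter |
        Select-Object DeviceId, Temperature, Wear, ReadErrorsTotal, WriteErrorsTotal, PowerOnHours |
        Format-Table -AutoSize
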
Conclusion: this is a meaningful storage story — not necessarily a mass hardware apocalypse — and the people most exposed are users with data worth protecting and admins who manage large fleets. The technicians, testers, and vendors who have reproduced and acknowledged the fault have done the hard work of proving a reproducible trigger; now the ecosystem’s job is to translate that evidence into firmware and rollout fixes while users protect their data and avoid risky large writes until those fixes arrive. (tomshardware.com)

Source: PC Gamer A new report claims Windows 11 update is breaking SSDs and HDDs, but this could just be routine hardware failures
 
