Microsoft’s December Patch Tuesday cumulative updates have broken Microsoft Message Queuing (MSMQ) for many enterprise environments, and — unusually — the vendor’s public guidance directs affected organizations to open a business support case rather than publish a one‑size‑fits‑all mitigation in the KB notes.
Background / Overview
Microsoft shipped its December 9, 2025 cumulative updates for multiple Windows SKUs, including KB5071546 for Windows 10 ESU/22H2 and companion updates for Windows Server branches. Within days, enterprise administrators reported a consistent regression: MSMQ queues going inactive, IIS‑hosted sites and applications failing with “Insufficient resources to perform operation” exceptions, and event log errors showing the OS cannot create MSMQ storage files. Microsoft updated the KB articles to list Message Queuing (MSMQ) as a known issue and explicitly stated it is investigating the cause while offering a workaround only through Microsoft Support for business customers. This article explains the technical root cause as documented by Microsoft and observed by admins, walks through the practical triage options (with concrete steps), and evaluates the operational and security trade‑offs enterprises face while waiting for a permanent fix.
What broke, exactly?
The symptoms administrators are seeing
- MSMQ queues appear inactive and refuse new messages; producers receive exceptions instead of successful enqueues.
- IIS‑hosted applications that post to MSMQ throw System.Messaging.MessageQueueException errors such as “Insufficient resources to perform operation,” which surface as HTTP 500s in production sites.
- Event logs show failures creating message files: “The message file 'C:\Windows\System32\MSMQ\storage\*.mq' cannot be created.”
- Some systems log misleading diagnostics like “There is insufficient disk space or memory” despite there being adequate resources — the underlying failure is not resource exhaustion but an access denial.
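If you need to sweep many hosts for these signatures, the event logs can be searched programmatically. The following is a minimal PowerShell sketch, assuming the error strings reported in the KB and community threads surface verbatim in event messages (provider names vary, so it matches on message text):

```powershell
# Minimal sketch: scan recent Application/System events for the MSMQ failure
# signatures. Assumes the error text matches the strings reported in the KB
# and in community triage threads.
$signature = 'Insufficient resources to perform operation|message file .* cannot be created'
Get-WinEvent -LogName Application, System -MaxEvents 5000 -ErrorAction SilentlyContinue |
    Where-Object { $_.Message -match $signature } |
    Select-Object TimeCreated, LogName, ProviderName, Id |
    Format-Table -AutoSize
```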
The root cause (what Microsoft says)
Microsoft attributes the regression to changes introduced to the MSMQ security model and to the NTFS permissions on the MSMQ storage folder, `C:\Windows\System32\MSMQ\storage`. The update hardened or altered the folder’s Access Control List (ACL) inheritance/flags so that service identities which previously could write there (IIS app pool identities, LocalService, NetworkService, or named service accounts) may no longer have the explicit write rights they need. When those identities lack write permission, MSMQ file creation and append operations fail, and callers receive resource‑style errors rather than an explicit access‑denied message. Microsoft’s KBs explicitly point to this permission change as the observable cause.
Why this matters for enterprise deployments
MSMQ remains widely used in legacy line‑of‑business systems, on‑prem integration middleware, and IIS‑hosted web apps that rely on durable, on‑disk message persistence. When MSMQ fails to persist messages, entire processing pipelines can block: queued background work fails, order processing halts, telemetry ingestion drops. Clustered MSMQ environments are particularly sensitive: when multiple nodes lose write access simultaneously under load, failover behavior can be destabilized and recovery protracted.
The operational catch: the two pragmatic mitigations in play — rolling back the LCU or changing NTFS ACLs — are both imperfect. Rolling back restores availability but also removes the security fixes installed by the LCU; loosening ACLs restores availability but widens the attack surface on a system folder unless the change is precisely scoped and tightly controlled. Microsoft’s guidance to contact business support before applying a mitigation reflects the vendor’s intent to avoid publishing a generic permission tweak that might be misapplied and increase risk.
Cross‑checking the facts
This regression is documented directly in the Microsoft KB articles for the affected packages: Windows 10 KB5071546, Windows Server 2019 KB5071544, Windows Server 2016 KB5071543, and corresponding rollups. Independent trade press and incident reporting from community Q&A threads corroborate the timeline, symptoms, and root‑cause analysis: BleepingComputer and multiple Microsoft Q&A threads report the same permission change and the vendor’s recommendation to contact business support for the workaround. Community triage has produced two repeatable mitigations — rollback or narrowly scoped ACL changes — which also appear in community posts and triage notes. Where reporting diverges: press and community discussions have suggested ad‑hoc ACL changes as a stopgap (for example, granting write to `IIS_IUSRS` or `NETWORK SERVICE`), but Microsoft did not publish a generic ACL recipe in the KB and prefers that customers use business support channels to obtain mitigations tailored to their environment. That guidance is explicit in the KBs.
Practical triage and mitigation options (for IT admins)
The following section outlines a step‑by‑step triage checklist and practical mitigations with clear operational caveats. These are triage steps: always test in an isolated lab, document changes, and coordinate with security and change‑control teams.
Detect — confirm you’re affected
- Confirm the KBs are installed: run `Get-HotFix` or check Windows Update history for KB5071546 / KB5071544 / KB5071543.
- Look for MSMQ‑related exceptions in application logs and Windows Event Viewer: `System.Messaging.MessageQueueException`, “Insufficient resources to perform operation,” and the storage file errors referencing `C:\Windows\System32\MSMQ\storage\*.mq`.
- Verify MSMQ queue state using `Get-MsmqQueue` (or vendor tools) and check whether queues are marked inactive. Community triage notes document that queues show as inactive when the permission change is in effect. A combined check is sketched after this list.
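A sketch combining these checks, assuming the December KB IDs above apply to your SKU and the MSMQ PowerShell module (installed alongside the MSMQ feature) is present:

```powershell
# Sketch: flag a host as potentially affected. The KB IDs come from the
# Microsoft advisories; Get-MsmqQueue ships with the MSMQ Windows feature.
$kbs = 'KB5071546', 'KB5071544', 'KB5071543'
$installed = Get-HotFix | Where-Object { $kbs -contains $_.HotFixID }
if ($installed) {
    Write-Warning "December LCU present: $($installed.HotFixID -join ', ')"
    # Inactive queues alongside the event-log errors are consistent with the
    # permission regression described in the KB.
    Get-MsmqQueue -QueueType Private | Select-Object QueueName, MessageCount
}
```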
Inspect the MSMQ folder ACLs (diagnostic commands)
- Use PowerShell to capture the current ACL: `Get-Acl -Path 'C:\Windows\System32\MSMQ\storage' | Format-List`
- Compare ACLs on a patched system versus a known good baseline (if available) or a test VM that does not have the December LCU installed. Several community posts show the patched SDDL differing in an Auto‑Inherited (AI) flag that removes write access for service identities. A comparison sketch follows this list.
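One way to run that comparison, sketched below on the assumption that you can copy the captured SDDL strings to a common location (the share path is hypothetical):

```powershell
# Run on both the patched server and the known-good baseline host.
$sddl = (Get-Acl -Path 'C:\Windows\System32\MSMQ\storage').Sddl
$sddl | Out-File "\\fileshare\triage\$env:COMPUTERNAME-msmq-sddl.txt"  # hypothetical share

# Then, on any workstation that has both files, diff them; a difference in the
# inheritance-related (AI) portion of the SDDL is the pattern community posts describe.
Compare-Object (Get-Content '.\patched-msmq-sddl.txt') (Get-Content '.\baseline-msmq-sddl.txt')
```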
Mitigation option A — rollback the LCU (when acceptable)
- If your environment allows, schedule a maintenance window and roll back the combined SSU+LCU package using DISM `/Remove-Package`, or follow the KB uninstall guidance. Test the rollback on a pilot server first, since uninstalling a combined SSU+LCU can be nontrivial; the Microsoft KBs include guidance on identifying the package names for removal (a DISM sketch follows this list).
- After rollback, verify that MSMQ behavior and queue writes resume normally. If rollback restores function, treat it as a temporary measure while waiting for a vendor fix, and document compensating security controls (segmentation, IDS/IPS signatures, additional logging) because rollback reintroduces the vulnerabilities the LCU fixed.
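A minimal DISM sketch for this path. Package identities vary per SKU and month, so confirm the exact name against the KB before removing anything; the removal line is deliberately left commented:

```powershell
# List installed packages and look for the cumulative update (names vary by SKU).
dism /online /get-packages /format:table | findstr /i "Rollup"

# In a maintenance window, on a pilot server first, substitute the identity
# returned above for <PackageIdentity>:
# dism /online /remove-package /packagename:<PackageIdentity>
```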
Mitigation option B — apply a narrowly scoped ACL change (higher operational risk)
If rollback is not possible, community triage and vendor comments indicate that granting explicit, minimally scoped write permissions to the exact service identities that need MSMQ storage access can restore functionality. Typical examples used in lab triage include:
- Test ACL grant (lab only — adjust to your identity names and inheritance flags): `icacls "C:\Windows\System32\MSMQ\storage" /grant "IIS_IUSRS:(OI)(CI)(M)"` and `icacls "C:\Windows\System32\MSMQ\storage" /grant "NT AUTHORITY\NETWORK SERVICE:(OI)(CI)(M)"`
- Restart the MSMQ service (`net stop msmq && net start msmq`) and validate application behavior.
Operational caveats if you take this path:
- Grant the least privilege necessary (do not give broad write access to Everyone or Users).
- Document precisely which accounts were changed and why.
- Apply changes to a representative test system and run a functional validation plan that includes failover and cluster testing.
- Plan to revert permission changes once Microsoft publishes the official fix. Community triage strongly recommends treating this as a temporary measure and obtaining guidance from Microsoft Support.
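To keep the change reversible, snapshot the folder’s security descriptor before granting anything. A sketch (run elevated; the backup path is arbitrary, and restoring a descriptor that includes an owner may require restore privileges):

```powershell
# Snapshot the current security descriptor before any grant.
$path = 'C:\Windows\System32\MSMQ\storage'
(Get-Acl -Path $path).Sddl | Out-File 'C:\Temp\msmq-storage-sddl.bak'

# Later, once the official fix ships, revert to the saved descriptor:
$orig = New-Object System.Security.AccessControl.DirectorySecurity
$orig.SetSecurityDescriptorSddlForm((Get-Content 'C:\Temp\msmq-storage-sddl.bak' -Raw))
Set-Acl -Path $path -AclObject $orig
```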
Mitigation option C — open a Microsoft Support for business case (the vendor’s recommended path)
Microsoft’s KBs state that a workaround is available for affected devices but instruct administrators to contact Microsoft Support for business to apply the workaround and mitigate the issue in their organizations. Opening a formal support case gives you a tailored mitigation, a record for compliance/audit, and visibility into when Microsoft will ship an official hotfix. This is the approach the KB explicitly endorses.
Decision matrix for IT leaders: how to choose
- Is the affected host business‑critical (payments, order processing, telemetry ingestion)?
- Yes → prioritize rapid availability: apply the narrow ACL fix in a controlled manner or roll back temporarily, but open a support case immediately. Document compensating controls if you roll back.
- No → consider delaying broad rollback; quarantine and test.
- Do you have a robust change‑control and testing pipeline?
- Yes → build a lab replica, reproduce the failure, and validate ACL recipes and rollback procedures before touching production.
- Is compliance or audit status impacted by a rollback (e.g., rollback removes security fixes your compliance posture depends on)?
- Yes → ACL workaround plus compensating controls may be preferable to rollback; however, seek Microsoft support guidance to document the trade‑offs.
Security and operational trade‑offs — a candid analysis
- Rollback restores functionality but reopens the vulnerability window the LCU resolved. That’s not just theoretical: these monthly cumulative updates often include significant security fixes. Reintroducing exposures can be unacceptable in high‑risk environments.
- Modifying NTFS ACLs on `C:\Windows\System32\MSMQ\storage` is a live system change to a sensitive OS folder. A poorly scoped permission can enlarge the attack surface and allow code running under a service identity to write into locations it previously could not, potentially aiding post‑exploit persistence or elevation paths. Any ACL change must be narrowly scoped, temporary, and fully documented.
- Not acting (waiting without mitigation) risks business downtime and data loss in production workflows and can cascade into customer impact and financial penalties. For many organizations the business continuity risk outweighs the incremental security risk of a carefully controlled ACL change. The right decision depends on risk tolerance, exposure, and compensating controls.
Recommended immediate action plan (concise checklist)
- Inventory: Identify servers and apps that have MSMQ installed and which applications enqueue messages.
- Detection: Search logs for “Insufficient resources to perform operation” and MSMQ storage file creation errors.
- Test: Reproduce the problem in a lab by installing the same LCU and validating the MSMQ failure pattern.
- Support: Open a Microsoft Support for business case and request the official mitigation; attach diagnostics, Event Viewer exports, `Get-Acl` output, and reproduction steps. Microsoft explicitly recommends contacting business support for the workaround. A collection helper is sketched after this checklist.
- Mitigate: If you cannot wait for support guidance and rollback is not possible, apply a minimally scoped ACL change in test, then roll it to production with strict logging and monitoring. Document the compensating controls and a removal plan so secure ACLs can be re‑applied post‑hotfix.
- Communicate: Inform application owners and customers about potential service degradation and the mitigation path selected. Keep an incident timeline and case ID for compliance audits.
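A small helper for the Support step above, sketched on the assumption that exported event logs, ACL output, and hotfix history are what the case engineer will request; paths are arbitrary:

```powershell
# Gather the diagnostics referenced in the checklist into one archive.
$out = 'C:\Temp\msmq-case'
New-Item -ItemType Directory -Path $out -Force | Out-Null
wevtutil epl Application "$out\Application.evtx" /ow:true   # export Application log
wevtutil epl System "$out\System.evtx" /ow:true             # export System log
Get-Acl -Path 'C:\Windows\System32\MSMQ\storage' | Format-List | Out-File "$out\storage-acl.txt"
Get-HotFix | Sort-Object InstalledOn | Out-File "$out\hotfixes.txt"
Compress-Archive -Path "$out\*" -DestinationPath "$out.zip" -Force
```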
Monitoring Microsoft’s fix timeline — what to expect
As of the KB updates, Microsoft has not published an ETA for an official fix and the issue is listed as “under investigation.” Admins should therefore expect either a hotfix/out‑of‑band update or a January Patch Tuesday cumulative update to carry the resolution, but no concrete date is available from Microsoft at the time of writing. Opening a business support case is the only guaranteed route to receive a tailored mitigation or an early hotfix when available.
Long‑term lessons and strategic recommendations
- Inventory and reduce reliance on legacy middleware where feasible. MSMQ is mature but legacy; consider migrating critical messaging workloads to actively maintained platforms such as Azure Service Bus, Kafka, RabbitMQ, or managed cloud messaging services where long‑term vendor support is clearer.
- Harden update testing: maintain a staging channel that installs cumulative updates before production, and run automated compatibility tests for critical middleware and IIS apps after each monthly LCU in a controlled pilot window. A staged rollout (canary → pilot → broad) prevents mass outage exposure.
- Improve change documentation: when updates alter system ACLs, KB notes should include a clear mitigation recipe; vendors should avoid security hardenings that break legitimate service identities without providing an explicit compatibility shim or documented ACL migration path. This incident underscores the need for better communication between vendors, patch teams, and operations.
Final takeaways
- The December 9, 2025 Patch Tuesday LCUs introduced an MSMQ regression caused by changes to the MSMQ security model and NTFS ACLs on `C:\Windows\System32\MSMQ\storage`. Microsoft has documented the problem in the KBs and recommends affected organizations contact Microsoft Support for business to obtain the workaround.
- Practical mitigations available in the wild are: (A) roll back the LCU; or (B) apply a narrowly scoped NTFS ACL grant to the identities that require write access. Both options carry trade‑offs: rollback reopens security exposures, and ACL changes increase the attack surface unless applied cautiously and temporarily.
- For enterprise IT, the immediate priorities are to detect affected systems, escalate to Microsoft business support for an official mitigation, and if necessary apply tightly controlled, reversible measures while documenting compensating security controls.
Source: TechRadar Having Windows app issues? Microsoft is making businesses reach out directly to get a fix