Quick summary
CVE-2026-31642 is a Linux kernel vulnerability in the rxrpc networking subsystem. The issue is not a typical remote-code-execution bug; it is a kernel concurrency/list-handling flaw in which an RxRPC call was removed from the global rxnet->calls list with the wrong list primitive. The vulnerable code used list_del_init() where RCU-safe deletion was required. Under concurrent access, especially while /proc/net/rxrpc/calls is being read, this could corrupt the reader's view of the list and potentially cause the proc-file reader to enter an infinite loop.
The fix changes RxRPC call removal to use list_del_rcu() and simplifies the surrounding cleanup logic so that rxrpc_put_call() becomes the single place where calls are removed from the list. The patch also limits diagnostic dumping in rxrpc_destroy_all_calls() to the first ten unexpected calls, avoiding the need to iterate indefinitely or manipulate the list from that cleanup path.
As of the CVE publication information supplied, NVD had marked the record as "Awaiting Enrichment," with no NVD CVSS 4.0, CVSS 3.x, or CVSS 2.0 score yet assigned. The source of the record is kernel.org, and Microsoft's Security Update Guide also has an entry for the CVE.
For most administrators, the practical response is straightforward:
- Update to a kernel build that includes the upstream/stable RxRPC fix.
- Check whether rxrpc, rxkad, kafs, or AFS-related functionality is in use.
- If RxRPC/AFS is not required and an immediate kernel update is not possible, consider unloading or blacklisting the relevant modules after testing.
- Treat scanner results carefully until distributions publish their own affected-version ranges, because Linux vendors often backport fixes without changing the upstream kernel version number.
What RxRPC is
RxRPC is a remote procedure call protocol implementation in the Linux kernel. In Linux, it is exposed through the AF_RXRPC address family and is used by both userspace and in-kernel consumers. The most common reason administrators encounter RxRPC is through AFS-related functionality, especially the in-kernel AFS client, sometimes referred to as kafs.
RxRPC sits on top of UDP and provides a session/call model. Instead of a simple one-packet request and one-packet response, RxRPC tracks calls, connections, peers, transport endpoints, retransmission behavior, security state, and call lifecycle transitions. That means the kernel has to maintain internal data structures representing active and recently active calls.
The vulnerable area concerns the internal list of RxRPC calls associated with an RxRPC network namespace object, referred to in the CVE description as rxnet->calls. This list can be exposed for diagnostic visibility through /proc/net/rxrpc/calls. That proc entry is useful for debugging and inspection because it lets the kernel present information about currently tracked RxRPC calls. The problem is that proc-file readers may be walking the list at the same time another part of the kernel is removing a call from it. That is exactly the type of situation where the kernel often uses RCU.
What RCU means in this context
RCU stands for Read-Copy-Update. It is a synchronization mechanism used heavily inside the Linux kernel when a data structure is read frequently but modified less often. The key idea is that readers can traverse a structure cheaply while writers update or remove elements in a way that does not immediately break concurrent readers.
With normal linked-list deletion, the deleted entry's pointers may be reinitialized or poisoned. That is helpful for catching bugs in non-RCU code, but it is dangerous if an RCU reader might still be following those pointers. For an RCU-protected list, deletion has to preserve enough list structure for concurrent readers to finish safely. That is why the kernel has the RCU-aware list primitive list_del_rcu() instead of ordinary deletion helpers such as:
Code:
list_del()
list_del_init()
list_del_init() removes an entry and reinitializes the node so that it appears as an empty, self-linked list. That behavior is useful when code later wants to test whether the node is still on a list. But if another CPU is traversing the list under RCU, changing the removed node's pointers in that way can confuse the reader. In the CVE's case, that confusion could affect reads of /proc/net/rxrpc/calls.
The fix uses list_del_rcu(), which is appropriate for RCU-protected list traversal. However, list_del_rcu() has an important side effect for the surrounding code: after RCU deletion, the deleted entry does not behave like a freshly initialized empty list node. In other words, the old list_empty()-based logic is no longer reliable for detecting whether that call has already been removed. The patch therefore changes the cleanup model rather than simply replacing one function call and leaving the rest untouched.
The core bug
The bug can be summarized as an unsafe mismatch between how the list is read and how it is modified. The list of RxRPC calls is read in a way that expects RCU-safe behavior, but call removal used list_del_init(). If a proc-file reader was walking /proc/net/rxrpc/calls while another CPU removed a call from rxnet->calls, the reader could see list pointers in an unexpected state. The CVE description specifically says this could "stuff up" reading /proc/net/rxrpc/calls and potentially cause an infinite loop.
That is a denial-of-service style failure mode. A kernel thread or userspace process reading the proc entry may spin or hang while traversing a corrupted view of the list. Depending on scheduler behavior, CPU availability, and where the loop occurs, the system impact could range from a stuck diagnostic command to more visible kernel or system instability.
The vulnerability description does not describe privilege escalation, arbitrary code execution, information disclosure, or remote compromise. It describes incorrect RCU-safe deletion and a possible infinite loop while reading a diagnostic proc interface. Until distribution advisories or additional research prove otherwise, it is best to classify this as a kernel availability/stability issue rather than a direct confidentiality or integrity compromise.
Why /proc/net/rxrpc/calls matters
Many kernel CVEs involve rare code paths, but proc interfaces are significant because they often expose internal kernel state to diagnostic tools, monitoring agents, support scripts, and sometimes unprivileged local users, depending on permissions and namespace configuration.
The proc file involved here is /proc/net/rxrpc/calls. That file reflects RxRPC call state. If RxRPC is not built, not loaded, or not in use, the file may not exist. If RxRPC is active, reading it may traverse the list that the patch changes.
A simple read might look like this:
cat /proc/net/rxrpc/calls
or:
sudo cat /proc/net/rxrpc/calls
Monitoring or debugging tools could also read the file indirectly. The concern is not that reading the file is malicious by itself; it is that reading it concurrently with call teardown could expose the list-deletion bug.
The issue becomes more relevant on systems where RxRPC calls are created and destroyed while administrators or tools inspect /proc/net/rxrpc/calls. A system using AFS or RxRPC-based services could naturally have call churn. A system with no RxRPC usage has a much smaller practical exposure.
The fix at a code-design level
The upstream fix has two connected parts.
First, RxRPC call removal from rxnet->calls is changed to use list_del_rcu() rather than list_del_init(). That aligns the writer side with the reader side: if readers are walking the list under RCU, writers should remove entries with RCU-safe list operations.
Second, because list_del_rcu() does not leave the entry in a normal "empty list" state, the old approach of using list_empty() to infer prior deletion no longer works. The patch changes ownership of deletion so that rxrpc_put_call() unconditionally deletes the call from the list and becomes the only deletion point.
The patch also changes rxrpc_destroy_all_calls(). Instead of trying to walk and manipulate the full list of remaining calls during destruction, it now only dumps the first ten unexpected calls. That matters because a destruction/debug path should not become another source of unsafe list mutation or unbounded iteration. By limiting diagnostic output, the fix avoids needing cond_resched() there and avoids removing calls from the list in that path.
The important lesson is that the patch is not merely a one-line substitution. It adjusts the lifecycle assumptions around RxRPC call objects so that RCU deletion and call cleanup remain internally consistent.
Why this became a CVE
The Linux kernel project has been assigning CVEs for many bug fixes that resolve security-relevant behavior. Kernel CVEs can sometimes look mundane compared with application CVEs because they often describe memory ordering, locking, reference counting, list traversal, teardown races, and error-path cleanup. But those are exactly the kinds of bugs that can become serious in kernel space.
In this case, the identified security-relevant behavior is a potential infinite loop while reading a proc interface due to unsafe list deletion. Even if the practical exploitability is narrow, a local user or workload that can trigger the relevant conditions could affect system availability. Kernel infinite loops are generally security relevant because they can cause denial of service.
The record was received from kernel.org, and the NVD entry was published on April 24, 2026. At the time reflected in the supplied details, NVD had not yet enriched the CVE with a CVSS vector or CWE mapping. That is common for new Linux kernel CVEs: the initial record often contains the upstream commit description and references, while vendor advisories and scoring arrive later.
Likely impact
The most likely impact is local denial of service or system instability related to reading /proc/net/rxrpc/calls while RxRPC calls are being removed.
Potential symptoms could include:
- A cat, monitoring process, or diagnostic command against /proc/net/rxrpc/calls hanging.
- High CPU usage caused by a process stuck reading the proc file.
- Kernel soft lockup warnings if a CPU spins long enough.
- RCU stall warnings in some scenarios.
- System sluggishness if the loop consumes CPU or ties up kernel execution.
- Difficulty unloading or tearing down RxRPC/AFS functionality if call cleanup paths are involved.
Based on the current description, this CVE does not indicate:
- Remote code execution.
- Local privilege escalation.
- Direct kernel memory disclosure.
- Container escape.
- Filesystem corruption.
- Cryptographic compromise.
Who is most likely affected
Systems are more likely to be relevant if they meet one or more of these conditions:
- The kernel has CONFIG_AF_RXRPC enabled.
- The rxrpc module is loaded or can be auto-loaded.
- The system uses the in-kernel AFS client.
- AFS-related modules such as kafs, or RxRPC security modules such as rxkad, are present and used.
- Monitoring, debugging, or support tools read /proc/net/rxrpc/calls.
- Users or workloads can create RxRPC sockets.
- The kernel version is from a branch that contains the vulnerable code and has not received the stable fix or vendor backport.
Exposure is likely lower if:
- RxRPC support is not compiled into the kernel.
- RxRPC is built as a module but not loaded and cannot be auto-loaded by unprivileged activity.
- No AFS or RxRPC functionality is used.
- Access to relevant proc files and local shell access is tightly controlled.
- The vendor kernel has already backported the fix.
Checking whether RxRPC is present
Start by checking the running kernel:
Code:
uname -a
uname -r
lsmod | grep -E 'rxrpc|rxkad|kafs'
You can also inspect /proc/modules:
grep -E 'rxrpc|rxkad|kafs' /proc/modules
Check whether the proc diagnostic path exists:
Code:
ls -l /proc/net/rxrpc 2>/dev/null
ls -l /proc/net/rxrpc/calls 2>/dev/null
If the kernel exposes /proc/config.gz, check for AF_RXRPC:
zgrep CONFIG_AF_RXRPC /proc/config.gz
Many distributions store kernel configs under /boot:
grep CONFIG_AF_RXRPC /boot/config-$(uname -r)
Possible results include:
Code:
CONFIG_AF_RXRPC=y
CONFIG_AF_RXRPC=m
# CONFIG_AF_RXRPC is not set
- CONFIG_AF_RXRPC=y means RxRPC is built directly into the kernel.
- CONFIG_AF_RXRPC=m means RxRPC is available as a loadable module.
- # CONFIG_AF_RXRPC is not set means the running kernel was not built with RxRPC support.
If the modules may exist, you can also query their metadata:
Code:
modinfo rxrpc 2>/dev/null
modinfo rxkad 2>/dev/null
modinfo kafs 2>/dev/null
Checking whether your vendor has patched it
Because distributions backport kernel fixes, uname -r alone is not enough. A vendor kernel may keep the same apparent upstream version while carrying hundreds or thousands of patches.
For Debian or Ubuntu-style systems, useful checks include:
Code:
apt list --installed 'linux-image*'
apt-cache policy linux-image-$(uname -r)
apt changelog linux-image-$(uname -r) | grep -i -E 'CVE-2026-31642|rxrpc|list_del_rcu'
For RHEL, CentOS Stream, Fedora, Oracle Linux, Rocky Linux, AlmaLinux, or SUSE-style RPM systems:
Code:
rpm -q kernel
rpm -q --changelog kernel | grep -i -E 'CVE-2026-31642|rxrpc|list_del_rcu'
With dnf:
Code:
dnf updateinfo list --cve CVE-2026-31642
dnf updateinfo info --cve CVE-2026-31642
On SUSE systems with zypper:
Code:
zypper patches | grep -i CVE-2026-31642
zypper lp --cve=CVE-2026-31642
On Arch-style systems:
Code:
pacman -Q linux
pacman -Qi linux
Whatever the distribution, record the kernel and OS release for your inventory:
Code:
uname -r
cat /etc/os-release
Microsoft, Windows, WSL, and Azure considerations
The source provided is a Microsoft Security Update Guide entry, but the vulnerable component is the Linux kernel. That distinction matters.
A normal Windows installation is not affected merely because MSRC lists the CVE. Windows does not use the Linux kernel as its host kernel. However, Microsoft environments can still be relevant in several cases:
- WSL2 distributions run on a Microsoft-provided Linux kernel.
- Azure Linux, Azure Kubernetes Service nodes, or Linux VMs may use kernels managed through Microsoft or distribution channels.
- Defender, vulnerability management tools, or MSRC tracking may surface Linux kernel CVEs for inventory visibility.
- Mixed Windows/Linux enterprises may see this CVE in Microsoft security dashboards even though remediation occurs on Linux assets.
For WSL2, you can update the Microsoft-provided kernel and verify it from inside the distribution:
Code:
wsl --update
wsl --shutdown
uname -a
For Azure Linux VMs or AKS nodes, remediation depends on the image family and update channel. In many cases, the correct response is to update the node image, apply the vendor kernel update, and reboot or roll nodes so the new kernel is actually running.
Immediate mitigation if you cannot patch
The best fix is a patched kernel. Kernel bugs should not be treated as permanently mitigated by configuration unless the affected feature can be completely disabled and kept disabled.
If RxRPC/AFS is not required, you may be able to unload related modules:
Code:
sudo modprobe -r kafs
sudo modprobe -r rxkad
sudo modprobe -r rxrpc
To see dependencies:
lsmod | grep -E 'rxrpc|rxkad|kafs'
To prevent future loading, create a modprobe blacklist file. For example:
Code:
sudo tee /etc/modprobe.d/disable-rxrpc.conf >/dev/null <<'EOF'
blacklist rxrpc
blacklist rxkad
blacklist kafs
install rxrpc /bin/false
install rxkad /bin/false
install kafs /bin/false
EOF
sudo update-initramfs -u
On many RHEL/Fedora-style systems:
sudo dracut -f
Reboot and verify:
lsmod | grep -E 'rxrpc|rxkad|kafs'
Important cautions:
- Do not blacklist these modules if you use AFS or any workload that depends on RxRPC.
- Test on non-production systems first.
- Blacklisting may not help if RxRPC is built into the kernel with CONFIG_AF_RXRPC=y.
- Module blacklisting is a mitigation, not a substitute for installing the patched kernel.
Operational risk in containers and Kubernetes
This CVE is a host-kernel issue. Containers do not carry their own Linux kernel. If a Kubernetes node kernel is vulnerable, every pod on that node shares the same kernel, even if the container image is fully patched.
The practical risk depends on whether workloads can interact with RxRPC or relevant proc entries. Hardened clusters may reduce exposure through:
- Dropping unnecessary Linux capabilities.
- Running containers as non-root.
- Using seccomp and AppArmor/SELinux policies.
- Restricting access to host /proc.
- Avoiding privileged containers.
- Avoiding hostPID and hostNetwork unless required.
- Preventing arbitrary module loading.
- Keeping node images updated.
For Kubernetes administrators, a reasonable response plan is:
- Identify affected node OS images and kernel builds.
- Check whether vendor advisories map CVE-2026-31642 to your node image.
- Roll out patched node images or kernel packages.
- Reboot or replace nodes so the patched kernel is running.
- Confirm with uname -r and vendor package metadata after the rollout.
- Consider disabling RxRPC-related modules on node pools that do not need them.
Severity assessment before NVD scoring
Because NVD had not assigned a CVSS score in the supplied details, administrators should avoid inventing a precise severity number. A sensible interim assessment is:
- Impact type: Availability.
- Likely attack position: Local or same-system context.
- Affected component: Linux kernel RxRPC subsystem.
- Exploit requirement: Ability to reach RxRPC call creation/removal and/or read the relevant proc diagnostic interface under the right race conditions.
- Most likely result: Hang, spin, or denial-of-service condition during proc list traversal.
- Known public scoring: Not yet provided by NVD in the supplied record.
A practical prioritization model:
- High priority: Systems using AFS/RxRPC, shared Linux servers, container hosts, exposed multi-tenant environments, systems where untrusted users can run code.
- Medium priority: General-purpose servers with RxRPC available but no known active use.
- Lower priority: Systems where RxRPC is not compiled, not loadable, or blocked by policy.
- Still patch during normal cycle: Any Linux system that will receive vendor kernel updates anyway.
What administrators should monitor
If you suspect exposure or are waiting for maintenance, watch for symptoms that could match the failure mode.
Check kernel logs:
dmesg -T | grep -i -E 'rxrpc|rcu|soft lockup|stall|watchdog|BUG'
With journalctl:
journalctl -k | grep -i -E 'rxrpc|rcu|soft lockup|stall|watchdog|BUG'
Look for stuck processes reading proc files:
Code:
ps auxww | grep rxrpc
ps auxww | grep '/proc/net/rxrpc'
Watch for CPU spinning with standard tools:
Code:
top
htop
pidstat 1
If a process is stuck reading /proc/net/rxrpc/calls, killing it may or may not resolve the condition, depending on where it is stuck. If the kernel is spinning in a non-interruptible path, a reboot may be required. That is one reason availability-class kernel bugs deserve attention even when they do not involve code execution.
Patch validation
After applying a vendor kernel update, reboot. Merely installing the package is not enough.
Check the running kernel:
uname -r
Check installed kernel packages:
rpm -q kernel
or:
dpkg -l | grep linux-image
Then verify the system is no longer running the old kernel. On many systems, admins install a new kernel but forget to reboot, leaving the vulnerable kernel active for days or weeks.
You can check boot time:
Code:
uptime
who -b
After updating, distinguish between these possible states:
- Kernel package installed.
- Patched kernel selected in bootloader.
- Patched kernel actually running.
- Old vulnerable kernel still present but not running.
- Old vulnerable kernel still bootable as fallback.
Why scanners may disagree
Linux kernel CVE detection is notoriously noisy. Different tools may report different results for CVE-2026-31642 because they use different data sources and matching logic.
Common causes of disagreement include:
- Vendor backports that fix the bug without changing the upstream version string.
- CVE records that are published before NVD enrichment.
- Kernel packages split across multiple binary packages.
- Cloud images using provider-specific kernels.
- Livepatch systems applying fixes in memory.
- Containers being scanned as if their image kernel package mattered.
- Distribution advisories lagging behind upstream stable commits.
- Custom kernels with local patches.
Guidance for security teams
Security teams should handle CVE-2026-31642 as a kernel availability vulnerability with environment-dependent exposure.
Recommended triage questions:
- Do we run Linux kernels with CONFIG_AF_RXRPC enabled?
- Is rxrpc built in or loadable as a module?
- Are rxrpc, rxkad, or kafs currently loaded anywhere?
- Do we use AFS or any service depending on RxRPC?
- Can untrusted users run local code on affected systems?
- Can containers access relevant proc interfaces or trigger module loading?
- Do monitoring tools read /proc/net/rxrpc/calls?
- Has our Linux vendor published a fixed kernel?
- Are patched kernels actually running after reboot?
- Can we safely disable RxRPC/AFS on systems that do not need it?
Apply the vendor kernel update that includes the RxRPC RCU-safe call-list deletion fix for CVE-2026-31642, then reboot into the patched kernel. If immediate patching is not possible and RxRPC/AFS is not required, disable or unload RxRPC-related modules after compatibility testing.
Guidance for Linux maintainers and developers
For kernel developers, the CVE is a reminder that RCU list APIs must be used consistently. If readers traverse a list under RCU, writers must use RCU-safe mutation primitives, and object lifetime must be managed so that readers cannot observe invalid or reinitialized list pointers.
The subtle part is that changing list_del_init() to list_del_rcu() can break code that previously relied on list_empty() after deletion. That is not a reason to keep the unsafe primitive. It means the object lifecycle should be redesigned so there is a clear deletion owner and no need to infer deletion state from a list node that is intentionally left in an RCU-compatible state.
The RxRPC fix follows that pattern:
- Use list_del_rcu() for RCU-safe list removal.
- Stop relying on list_empty() for post-deletion detection.
- Make rxrpc_put_call() the single deletion path.
- Limit destruction-time diagnostics rather than mutating the list from a debug cleanup path.
Practical remediation checklist
Use this checklist for a Linux server or fleet:
Code:
# 1. Identify kernel
uname -a
uname -r
# 2. Check RxRPC config
zgrep CONFIG_AF_RXRPC /proc/config.gz 2>/dev/null || grep CONFIG_AF_RXRPC /boot/config-$(uname -r)
# 3. Check loaded modules
lsmod | grep -E 'rxrpc|rxkad|kafs'
# 4. Check proc interface
ls -l /proc/net/rxrpc /proc/net/rxrpc/calls 2>/dev/null
# 5. Check vendor package status
rpm -q kernel 2>/dev/null || dpkg -l | grep linux-image
# 6. Search changelogs where supported
rpm -q --changelog kernel 2>/dev/null | grep -i -E 'CVE-2026-31642|rxrpc|list_del_rcu'
apt changelog linux-image-$(uname -r) 2>/dev/null | grep -i -E 'CVE-2026-31642|rxrpc|list_del_rcu'
# 7. Update using your vendor's package manager
# Debian/Ubuntu example:
sudo apt update && sudo apt upgrade
# RHEL/Fedora-style example:
sudo dnf update
# 8. Reboot
sudo reboot
# 9. Confirm new kernel is running
uname -r
# 10. Optional: if RxRPC/AFS is unused, unload the modules (test first)
sudo modprobe -r kafs rxkad rxrpc
Then consider blacklisting only after confirming no required service depends on it.
Bottom line
CVE-2026-31642 is a Linux kernel RxRPC bug caused by unsafe removal of RxRPC call objects from an RCU-read list. The immediate failure mode described is an infinite loop while reading /proc/net/rxrpc/calls, making this primarily an availability and kernel-stability issue. The upstream fix changes deletion to list_del_rcu() and restructures cleanup so that the call is removed in one consistent place.
The absence of an NVD score at initial publication should not lead administrators to ignore it. The right response is to patch through your Linux vendor, reboot into the fixed kernel, and disable RxRPC-related modules where they are not needed and where doing so is operationally safe.
Source: NVD / Linux Kernel Security Update Guide - Microsoft Security Response Center