RE: [RFC PATCH 0/6] Implement initial CXL Timeout & Isolation support

[ add linux-mm because the implications of this feature weigh much
  heavier on mm/ than drivers/ ]

Ben Cheatham wrote:
> Implement initial support for CXL.mem Timeout & Isolation (CXL 3.0
> 12.3.2). This series implements support for CXL.mem enabling and
> programming CXL.mem transaction timeout, CXL.mem error isolation,
> and error isolation interrupts for CXL-enabled PCIe root ports that
> implement the CXL Timeout & Isolation capability.
> 
> I am operating under the assumption that recovery from error isolation
> will be more involved than just resetting the port and turning off
> isolation, so that flow is not implemented here.

That needs to be answered first.

The notification infrastructure is trivial in comparison. The use case
needs to be clear *before* we start adding infrastructure to Linux that
may end up just being dead code, and certainly before we further burden
portdrv.c with more entanglements.

I put together the write-up below, in a separate context, to capture my
thoughts on the error isolation capability, and why Linux really needs
an end user to stand up and say, "yes, even with all those caveats CXL
Error Isolation is still useful for our environment." The tl;dr is "are
you absolutely sure you would not just rather reboot?"

> There is also no support for CXL.cache, but I plan to eventually
> implement both.

To date there are zero upstream drivers for CXL.cache initiators, and
CXL 3.0 introduced HDM-DB to supplant HDM-D. All that to say: with zero
mass-market adopters (devices with upstream drivers) of HDM-D, and zero
HDM-DB platforms available today, it seems Linux has time to let this
continue to evolve. If anyone reading this has a CXL.cache initiator
driver waiting in the wings, do reach out on the list to clarify what
commons belong in the Linux CXL and/or PCI core.

> The series also introduces a PCIe port bus driver dependency on the CXL
> core. I expect to be able to remove that when my team submits
> patches for a future rework of the PCIe port bus driver.

We have time to wait for that work to settle. Do not make the job harder
in the near term by adding one more dependency to unwind.

> I have done some testing using QEMU by adding the isolation registers
> and a hacked-up QMP command to test the interrupt flow, but I *DID NOT*
> implement the actual isolation feature and the subsequent device
> behavior. I'd be willing to share these changes (and my config) if
> anyone is interested in testing this.
> 
> Any thoughts/comments would be greatly appreciated!

---
Memory Error Isolation is a mechanism that *might* allow recovery of CXL
Root Port Link Down conditions or CXL transaction timeouts. When the
event occurs, outstanding writes are dropped, and outstanding reads
terminate with a machine check. In order to exit Isolation, the link
must transition through a "link down" status. Effectively, Isolation
behaves like a machine check storm (until system software can evacuate
all users) followed by a surprise hot-removal (unplug) of the memory,
where recovery amounts to a hot re-plug of the device with a full
re-enumeration thereafter. From
the Linux perspective all memory contents are considered forfeited. This
poses several challenges for how to utilize the memory to achieve
reliable recovery. The criterion for evaluating the Linux upstream
maintenance cost of overcoming those challenges is whether the sum
total of those mitigations remains an improvement over a system reboot
to recover from the same Isolation event. Add to that the fact that
failing to fully account for any one of these challenges still results
in a reboot anyway, by way of kernel panic.

In order to understand the limits of Isolation recovery relative to
reboot recovery, it is important to understand the fundamental
limitations of Linux machine check recovery and memory hot-removal.
Machine checks are typically only recoverable when they hit user memory.
Roughly, if a machine check occurs in a kernel-mapped page, the machine
check handler triggers a kernel panic. Machine checks are often
recoverable because failures are limited to a few cachelines at a time
and the page allocation distribution is heavily skewed towards
user-memory.

CXL Isolation takes down an entire CXL root port, and with interleaving
can cause a region spanning multiple ports to be taken down. Even if
the interleave configuration forfeited bandwidth to contain isolation
events to a single CXL root port, it is still on the order of 100s of
GBs that will start throwing machine checks on access all at once. If
that memory is being used by the kernel as typical System-RAM, some of
it is likely to be kernel mapped. Marking all CXL memory as ZONE_MOVABLE
to avoid kernel allocations is not a reliable mitigation for this
scenario as the kernel always needs some ratio of ZONE_NORMAL to access
ZONE_MOVABLE memory.
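
(For reference, onlining hotplug memory as ZONE_MOVABLE is only a
policy choice, e.g. via sysfs, with the memory block number depending
on the platform:

  echo online_movable > /sys/devices/system/memory/memoryN/state

...and no amount of that policy removes the ZONE_NORMAL dependency
noted above.)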

The "kernel memory" problem similarly effects surprise removal events.
The kernel can only reliably unplug memory that it can force offline and
there is no facility to force-offline ZONE_NORMAL memory. Long-term
memory pinning, like guest VMs with assigned devices or RDMA, or even
short-term pins from transient device DMA, can hold off memory removal
indefinitely, which means recovery may be held off indefinitely. For
System-RAM there is no facility to notify all users to evacuate, instead
the memory hot-removal code walks every page in the range to be offlined
and aborts if any single page has an elevated reference count.
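
To make that concrete, the only "evacuate" knob the kernel offers is
the per-memory-block offline flow (block number illustrative):

  echo offline > /sys/devices/system/memory/memory42/state

...and that write typically fails with EBUSY if even one page in the
block is pinned; nothing can compel the pinning agent to drop its
reference.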

It follows from the above that Isolation requires that the CXL memory
ranges to be recovered must never be online as System-RAM; the kernel
cannot offer typical memory management services for memory that is
subject to "surprise removal". Instead, device-dax is a facility with
properties that may allow recovery to be reliable. The device-dax
facility arranges for a given memory range to be mappable via a
device-file, which effectively allows userspace management of that
memory, but at the cost of application changes.
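
For illustration, a minimal sketch of the application-side contract,
assuming a device-dax instance at /dev/dax0.0 (the path and size are
illustrative, and the mapping must honor the instance's base
alignment):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define DAX_LEN (1UL << 30)	/* 1GiB, assumed <= instance size */

int main(void)
{
	int fd = open("/dev/dax0.0", O_RDWR);	/* hypothetical instance */
	void *p;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * device-dax only supports shared mappings, and the mapping
	 * goes straight to device memory: no page cache, no anonymous
	 * fallback, so all placement policy is up to the application.
	 */
	p = mmap(NULL, DAX_LEN, PROT_READ | PROT_WRITE, MAP_SHARED,
		 fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* ... application-managed allocation within [p, p + DAX_LEN) ... */

	munmap(p, DAX_LEN);
	close(fd);
	return 0;
}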

If, for example, the intended use case of Isolation-capable CXL memory
is to host VMs that can die during an Isolation event while keeping the
rest of the system up, then that could be achieved with device-dax:
allocate a device-dax instance per-VM and specify it as a
"memory-backend-file" to qemu-kvm.

Again, the loss of typical core-mm memory semantics and the need for
application changes raise the question of whether reboot is preferable
to Isolation recovery. Unlike System-RAM, which supports anonymous
mappings disconnected from the physical memory device, device-dax
implements file-backed mappings, which include methods to reverse-map
all users and revoke their access to the memory range that has gone
into Isolation.
---

So if someone says, "yes I can tolerate losing a root port at a time and
I can tolerate deploying my workloads with userspace memory management,
and this is preferable to a reboot", then maybe Linux should entertain
CXL Error Isolation. Until such an end use case gains clear uptake, it
seems too early to worry about plumbing the notification mechanism.



