Hi Zhi,
Some comments below.
On 10/21/24 11:49, Zhi Wang wrote:
On 21/09/2024 1.34, Zhi Wang wrote:
Hi folks:
Thanks so much for the comments and discussions in the mails and the
collaboration meeting. Here is an update on the major opens raised,
along with conclusions/next steps:
1) It is not necessary to support multiple virtual HDM decoders for a
CXL type-2 device. (Jonathan)
I asked SW folks around about the requirement for multiple HDM decoders
in a CXL type-2 device driver. It seems one is enough, which is
reasonable, because the CXL region created by the type-2 device driver
is mostly kept for its own private use.
2) Pre-created vs post-created CXL region for the guest.
(Dan/Kevin/Alejandro)
There has been a discussion about when to create the CXL region for the
guest.
Option a: The pCXL region is pre-created before the VM boots. When the
guest creates a CXL region via its virtual HDM decoder, QEMU maps the
pCXL region to the virtual CXL region configured by the guest. Changes
and re-configuration of the pCXL region are not expected.
Option b: The pCXL region is (re)created when the guest creates a CXL
region via its virtual HDM decoder. QEMU traps the HDM decoder commits,
triggers the pCXL region creation, and maps the pCXL region to the
virtual CXL region, as sketched below.
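To make the option-b flow concrete, here is a minimal, self-contained
sketch of the trap path. Every type and function name in it is
hypothetical (none are real QEMU or kernel symbols); only the
sequencing reflects the description above:

#include <stdint.h>
#include <stdbool.h>

struct vcxl_dev {
        bool committed;         /* emulated HDM decoder COMMITTED bit */
};

/* Stand-ins for the host-side work; stubbed so the sketch compiles. */
static int host_create_pcxl_region(struct vcxl_dev *d, uint64_t size)
{
        (void)d; (void)size; return 0;
}

static int host_map_pcxl_region(struct vcxl_dev *d, uint64_t gpa,
                                uint64_t size)
{
        (void)d; (void)gpa; (void)size; return 0;
}

/* Invoked when the guest sets COMMIT in its virtual HDM decoder. */
static void on_vhdm_commit(struct vcxl_dev *d, uint64_t gpa, uint64_t size)
{
        /* (Re)create the pCXL region, allocating host HPA at commit time. */
        if (host_create_pcxl_region(d, size))
                return; /* e.g. host HPA exhausted: COMMITTED stays clear */

        /* Back the guest-programmed GPA range with the new pCXL region. */
        if (host_map_pcxl_region(d, gpa, size))
                return;

        /* Report success through the emulated decoder status. */
        d->committed = true;
}

Note how errors (e.g. HPA exhaustion) surface naturally in this flow:
the virtual decoder simply never reports COMMITTED, which is the same
failure mode the guest would see on bare metal.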
Alejandro (option b):
- Will write a doc to elaborate on the CXL.cache problems and why
option b should be chosen.
Kevin (option b):
- The CXL region is a SW concept; it should be controlled by the guest SW.
Dan (option a):
- Error handling when creating the pCXL region at runtime, e.g. the
host running out of available HPA in its fixed memory window while
creating the pCXL region.
I think there is nothing option b cannot do, including any error
handling. Available HPA can change, but this is no different from how
it is handled for host devices trying to get an HPA range concurrently.
- CXL.cache might need extra handling which cannot be done at runtime.
(Need to check Alejandro's doc)
Next step:
- Will check with Alejandro and start from his doc about CXL.cache concerns.
Working on it. Hopefully a first draft next week.
3) Is this exclusively a type2 extension or how do you envision type1/3
devices with vfio? (Alex)
For type-3 device passthrough, given its nature as a memory expander,
CXL folks have decided to use either virtio-mem or another stub driver
in QEMU to manage/map the memory to the guest.
For type-1 devices, I am not aware of any on the market.
Dan commented in the CXL discord group:
"my understanding is that some of the CXL FPGA kits offer Type-1 flows,
but those are for custom solutions not open-market. I am aware of some
private deployments of such hardware, but nothing with an upstream driver."
My take is that we don't need to consider supporting type-1 device
passthrough for now.
I cannot see a difference between Type1 and Type2 regarding CXL.cache
support. Once we have a solution for Type2, it should work for Type1 as
well.
Thanks,
Alejandro
Z.
Hi folks:
As promised at LPC, here is everything you need (patches, repos, a
guiding video, kernel configs) to build an environment to test the
vfio-cxl-core.
Thanks so much for the discussions! Enjoy and see you in the next one.
Background
==========
Compute Express Link (CXL) is an open standard interconnect built upon
the industry-standard PCI layers to enhance the performance and
efficiency of data centers by enabling high-speed, low-latency
communication between CPUs and various types of devices such as
accelerators and memory expanders.
It supports three key protocols: CXL.io as the control protocol,
CXL.cache as the cache-coherent host-device data transfer protocol, and
CXL.mem as the memory expansion protocol. CXL Type 2 devices leverage
all three protocols to integrate seamlessly with host CPUs, providing a
unified and efficient interface for high-speed data transfer and memory
sharing. This integration is crucial for heterogeneous computing
environments where accelerators, such as GPUs and other specialized
processors, are used to handle intensive workloads.
Goal
====
Although CXL is built upon the PCI layers, passing through a CXL type-2
device differs from passing through a PCI device, according to the CXL
specification[1]:
- CXL type-2 device initialization. A CXL type-2 device requires an
additional initialization sequence besides the PCI device
initialization, which can be pretty complicated due to its hierarchy of
register interfaces. Thus, the standard CXL type-2 driver
initialization sequence provided by the kernel CXL core is used.
- Creating a CXL region and mapping it to the VM. A mapping between HPA
and DPA (Device PA) needs to be created to access the device memory
directly. HDM decoders in the CXL topology need to be configured level
by level to manage the mapping (a minimal translation sketch follows
this list). After the region is created, it needs to be mapped to GPA
via the virtual HDM decoders configured by the VM.
- CXL reset. The CXL device reset is different from the PCI device
reset; a dedicated CXL reset sequence is defined by the CXL spec.
- Emulating CXL DVSECs. The CXL spec defines a set of DVSEC registers
in the configuration space for device enumeration and device control
(e.g. whether a device is capable of CXL.mem/CXL.cache, and
enabling/disabling those capabilities). They are owned by the kernel
CXL core, and the VM must not modify them.
- Emulating CXL MMIO registers. The CXL spec defines a set of CXL MMIO
registers that can sit in a PCI BAR. The locations of the register
groups within the PCI BAR are indicated by the register locator in the
CXL DVSECs. These registers are also owned by the kernel CXL core, and
some of them need to be emulated.
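Since the thread above concluded that a single virtual HDM decoder is
enough, the HPA-to-DPA mapping a committed, non-interleaved endpoint
decoder implements reduces to a linear offset. Below is a minimal
sketch of that translation; the field names are made up for
illustration, and the real register layout lives in the CXL spec's
component register block:

#include <stdint.h>

struct hdm_decoder {
        uint64_t hpa_base;      /* decoder HPA base */
        uint64_t hpa_size;      /* decoder size */
        uint64_t dpa_skip;      /* DPA skip programmed for this decoder */
};

/* Return the DPA backing a given HPA, or UINT64_MAX on a decoder miss. */
static uint64_t hdm_translate(const struct hdm_decoder *d, uint64_t hpa)
{
        if (hpa < d->hpa_base || hpa - d->hpa_base >= d->hpa_size)
                return UINT64_MAX;

        /* With one interleave way, the mapping is a plain linear offset. */
        return (hpa - d->hpa_base) + d->dpa_skip;
}

With interleaving, the decoder would instead extract the interleave-way
bits from the HPA before forming the DPA offset, which is one reason
multi-decoder/interleaved configurations were kept out of scope.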
Design
======
To achieve the purpose above, vfio-cxl-core is introduced to host the
common routines that variant drivers require for device passthrough.
Similar to vfio-pci-core, vfio-cxl-core provides common vfio_device_ops
routines for the variant driver to hook into, and performs the CXL
routines behind them.
Besides, several extra APIs are introduced for the variant driver to
provide the kernel CXL core with the information necessary to
initialize the CXL device, e.g. the device DPA.
CXL is built upon the PCI layers, but with differences. Thus,
vfio-pci-core is reused as much as possible, with added awareness of
operating on a CXL device. A rough sketch of a variant driver built on
these cores follows.
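A variant driver would look much like today's vfio-pci variant drivers,
except that it hooks the vfio-cxl-core helpers. The sketch below is not
lifted from PATCH 12; apart from vfio_cxl_core_{read,write}() (PATCH 7)
and the standard vfio / vfio-pci-core symbols, the vfio_cxl_core_*
names and prototypes are hypothetical stand-ins for this series' entry
points:

#include <linux/module.h>
#include <linux/pci.h>
#include <linux/sizes.h>
#include <linux/vfio.h>
#include <linux/vfio_pci_core.h>

/* Hypothetical vfio-cxl-core entry points (illustrative prototypes): */
int vfio_cxl_core_init_dev(struct vfio_device *vdev);
void vfio_cxl_core_release_dev(struct vfio_device *vdev);
int vfio_cxl_core_open_device(struct vfio_device *vdev);
void vfio_cxl_core_close_device(struct vfio_device *vdev);
ssize_t vfio_cxl_core_read(struct vfio_device *vdev, char __user *buf,
                           size_t count, loff_t *ppos);
ssize_t vfio_cxl_core_write(struct vfio_device *vdev,
                            const char __user *buf, size_t count,
                            loff_t *ppos);
void vfio_cxl_core_set_dpa_size(struct vfio_pci_core_device *vdev, u64 size);

static const struct vfio_device_ops my_cxl_ops = {
        .name           = "my-vfio-cxl",
        /* CXL-aware hooks that fall through to vfio-pci-core: */
        .init           = vfio_cxl_core_init_dev,
        .release        = vfio_cxl_core_release_dev,
        .open_device    = vfio_cxl_core_open_device,
        .close_device   = vfio_cxl_core_close_device,
        .read           = vfio_cxl_core_read,
        .write          = vfio_cxl_core_write,
        .ioctl          = vfio_pci_core_ioctl,  /* existing helper */
        .mmap           = vfio_pci_core_mmap,   /* existing helper */
};

static int my_cxl_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        struct vfio_pci_core_device *vdev;
        int ret;

        vdev = vfio_alloc_device(vfio_pci_core_device, vdev, &pdev->dev,
                                 &my_cxl_ops);
        if (IS_ERR(vdev))
                return PTR_ERR(vdev);

        /* Hand the kernel CXL core what it needs to bring the device
         * up, e.g. the device's DPA capacity (API name hypothetical). */
        vfio_cxl_core_set_dpa_size(vdev, SZ_256M);

        dev_set_drvdata(&pdev->dev, vdev);
        ret = vfio_pci_core_register_device(vdev);
        if (ret)
                vfio_put_device(&vdev->vdev);
        return ret;
}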
A new VFIO device region is introduced to expose the CXL region to
userspace. A new CXL VFIO device cap has also been introduced to convey
the necessary CXL device information to userspace.
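From userspace, discovery would presumably go through the standard VFIO
cap-chain mechanism. The sketch below only uses existing VFIO UAPI
(VFIO_DEVICE_GET_INFO, VFIO_DEVICE_FLAGS_CAPS, struct
vfio_info_cap_header); the actual CXL cap ID to match is whatever PATCH
11 defines, so this simply prints every cap header:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Walk the device-info cap chain; a real consumer would match the CXL
 * cap ID defined by this series and parse the CXL device info. */
static void dump_device_caps(int device_fd)
{
        struct vfio_device_info hdr = { .argsz = sizeof(hdr) };
        struct vfio_device_info *info = &hdr;

        /* The first call reports the full size, cap chain included. */
        if (ioctl(device_fd, VFIO_DEVICE_GET_INFO, info))
                return;

        if (hdr.argsz > sizeof(hdr)) {
                info = malloc(hdr.argsz);
                if (!info)
                        return;
                memcpy(info, &hdr, sizeof(hdr));
                if (ioctl(device_fd, VFIO_DEVICE_GET_INFO, info))
                        goto out;
        }

        if ((info->flags & VFIO_DEVICE_FLAGS_CAPS) && info->cap_offset) {
                struct vfio_info_cap_header *cap =
                        (void *)((char *)info + info->cap_offset);

                for (;;) {
                        printf("device cap id %u, version %u\n",
                               cap->id, cap->version);
                        if (!cap->next)
                                break;
                        cap = (void *)((char *)info + cap->next);
                }
        }
out:
        if (info != &hdr)
                free(info);
}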
Patches
=======
The patches are based on the cxl-type2 support RFCv2 patchset[2]. They
will be rebased onto v3 once the cxl-type2 support v3 patch review is
done.
PATCH 1-3: Expose the necessary routines required by vfio-cxl.
PATCH 4: Introduce the preludes of vfio-cxl, including CXL device
initialization and CXL region creation.
PATCH 5: Expose the CXL region to userspace.
PATCH 6-7: Prepare to emulate the HDM decoder registers.
PATCH 8: Emulate the HDM decoder registers.
PATCH 9: Tweak vfio-cxl to be aware of working on a CXL device.
PATCH 10: Tell vfio-pci-core to emulate CXL DVSECs.
PATCH 11: Expose the CXL device information that userspace needs.
PATCH 12: An example variant driver to demonstrate the usage of
vfio-cxl-core from the perspective of the VFIO variant driver.
PATCH 13: A workaround that needs suggestions.
Test
====
To test the patches and hack around, a virtual passthrough setup with
nested virtualization is used.
The host QEMU emulates a CXL type-2 accel device based on Ira's
patches[3], with changes to emulate HDM decoders.
While running vfio-cxl in the L1 guest, an example VFIO variant driver
is used to attach to the QEMU CXL accel device.
The L2 guest can be booted via QEMU with the vfio-cxl support in the
VFIOStub.
In the L2 guest, a dummy CXL device driver is provided to attach to the
virtual passed-through device.
With the kernel CXL core type-2 support, the dummy CXL type-2 device
driver can successfully be loaded and can create a CXL region by
requesting the CXL core to allocate HPA and DPA and to configure the
HDM decoders, as sketched below.
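For reference, the dummy driver's probe flow in the L2 guest is
essentially the type-2 flow from [2]. Since that series' API is still
under review, the cxl_accel_* names and signatures below are
illustrative stand-ins rather than the real ones:

#include <linux/err.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/sizes.h>
#include <linux/cxl_accel_mem.h>        /* from the type-2 series [2] */

/* Hypothetical type-2 accel APIs (illustrative prototypes): */
struct cxl_dev_state *cxl_accel_state_create(struct pci_dev *pdev, u64 dpa);
int cxl_accel_create_region(struct cxl_dev_state *cxlds, u64 size);

static int dummy_cxl_probe(struct pci_dev *pdev,
                           const struct pci_device_id *id)
{
        struct cxl_dev_state *cxlds;

        /* Register the accelerator with the CXL core, declaring its
         * device memory (DPA) capacity. */
        cxlds = cxl_accel_state_create(pdev, SZ_256M);
        if (IS_ERR(cxlds))
                return PTR_ERR(cxlds);

        /* Ask the core for a CXL region: it allocates HPA from a fixed
         * memory window, pairs it with the device DPA, and programs the
         * HDM decoders level by level down to the endpoint. */
        return cxl_accel_create_region(cxlds, SZ_256M);
}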
To make sure everyone can test the patches, the kernel configs for L1
and L2 are provided in the repos. The required kernel command-line
params and QEMU command lines can be found in the demonstration
video[4].
Repos
=====
QEMU host: https://github.com/zhiwang-nvidia/qemu/tree/zhi/vfio-cxl-qemu-host
L1 Kernel: https://github.com/zhiwang-nvidia/linux/tree/zhi/vfio-cxl-l1-kernel-rfc
L1 QEMU: https://github.com/zhiwang-nvidia/qemu/tree/zhi/vfio-cxl-qemu-l1-rfc
L2 Kernel: https://github.com/zhiwang-nvidia/linux/tree/zhi/vfio-cxl-l2
[1] https://computeexpresslink.org/cxl-specification/
[2] https://lore.kernel.org/netdev/20240715172835.24757-1-alejandro.lucero-palau@xxxxxxx/T/
[3] https://patchew.org/QEMU/20230517-rfc-type2-dev-v1-0-6eb2e470981b@xxxxxxxxx/
[4] https://youtu.be/zlk_ecX9bxs?si=hc8P58AdhGXff3Q7
Feedback expected
=================
- Architecture level between vfio-pci-core and vfio-cxl-core.
- Variant driver requirements from more hardware vendors.
- vfio-cxl-core UABI to QEMU.
Moving forward
==============
- Rebase the patches on top of Alejandro's PATCH v3.
- Get Ira's type-2 emulated device patch[3] into upstream, as both CXL
folks and RH folks have come to talk about it and expect it. I had a
chat with Ira and he expects me to take it over. Will start a
discussion in the CXL discord group about the design of V1.
- Sparse map in vfio-cxl-core.
Known issues
============
- Teardown path. The missing teardown paths have been implemented in
Alejandro's PATCH v3. This should be solved after the rebase.
- Power down the L1 guest instead of rebooting it. The QEMU reset
handler is missing in Ira's patch; when rebooting L1, many CXL
registers are not reset. This will be addressed in the formal review of
the emulated CXL type-2 device support.
Zhi Wang (13):
cxl: allow a type-2 device not to have memory device registers
cxl: introduce cxl_get_hdm_info()
cxl: introduce cxl_find_comp_reglock_offset()
vfio: introduce vfio-cxl core preludes
vfio/cxl: expose CXL region to the userspace via a new VFIO device
region
vfio/pci: expose vfio_pci_rw()
vfio/cxl: introduce vfio_cxl_core_{read, write}()
vfio/cxl: emulate HDM decoder registers
vfio/pci: introduce CXL device awareness
vfio/pci: emulate CXL DVSEC registers in the configuration space
vfio/cxl: introduce VFIO CXL device cap
vfio/cxl: VFIO variant driver for QEMU CXL accel device
vfio/cxl: workaround: don't take resource region when cxl is enabled.
drivers/cxl/core/pci.c | 28 ++
drivers/cxl/core/regs.c | 22 +
drivers/cxl/cxl.h | 1 +
drivers/cxl/cxlpci.h | 3 +
drivers/cxl/pci.c | 14 +-
drivers/vfio/pci/Kconfig | 6 +
drivers/vfio/pci/Makefile | 5 +
drivers/vfio/pci/cxl-accel/Kconfig | 6 +
drivers/vfio/pci/cxl-accel/Makefile | 3 +
drivers/vfio/pci/cxl-accel/main.c | 116 +++++
drivers/vfio/pci/vfio_cxl_core.c | 647 ++++++++++++++++++++++++++++
drivers/vfio/pci/vfio_pci_config.c | 10 +
drivers/vfio/pci/vfio_pci_core.c | 79 +++-
drivers/vfio/pci/vfio_pci_rdwr.c | 8 +-
include/linux/cxl_accel_mem.h | 3 +
include/linux/cxl_accel_pci.h | 6 +
include/linux/vfio_pci_core.h | 53 +++
include/uapi/linux/vfio.h | 14 +
18 files changed, 992 insertions(+), 32 deletions(-)
create mode 100644 drivers/vfio/pci/cxl-accel/Kconfig
create mode 100644 drivers/vfio/pci/cxl-accel/Makefile
create mode 100644 drivers/vfio/pci/cxl-accel/main.c
create mode 100644 drivers/vfio/pci/vfio_cxl_core.c