On Thu, 13 Dec 2018 17:17:34 +1100
Alexey Kardashevskiy <aik@xxxxxxxxx> wrote:

> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
> pluggable PCIe devices but still have PCIe links used for config space
> and MMIO. In addition, the GPUs have 6 NVLinks which are connected to
> other GPUs and to the POWER9 CPU. POWER9 chips have a special unit on
> the die called an NPU which is an NVLink2 host bus adapter with p2p
> connections to 2 or 3 GPUs, with 3 or 2 NVLinks to each. These systems
> also support ATS (address translation services), which is part of the
> NVLink2 protocol. Such GPUs also expose their on-board RAM (16GB or
> 32GB) to the system via the same NVLink2, so the CPU has cache-coherent
> access to the GPU RAM.
>
> This exports the GPU RAM to userspace as a new VFIO device region. It
> preregisters the new memory as device memory since it might be used for
> DMA. The pfns are inserted from the fault handler because the GPU
> memory is not onlined until the vendor driver is loaded and has trained
> the NVLinks; doing it earlier causes low level errors which are fenced
> in the firmware, so they do not hurt the host system, but are still
> better avoided. For the same reason this does not map the GPU RAM into
> the host kernel (which would otherwise be the usual thing to do for
> emulated access).
>
> This also exports an NPU ATSD (Address Translation Shootdown) register
> which allows the operating system to invalidate TLBs inside a GPU. The
> register conveniently occupies a single 64k page and is presented to
> userspace as another new VFIO device region. One NPU has 8 ATSD
> registers, each of which can be used for TLB invalidation in any GPU
> linked to that NPU. This allocates one ATSD register per NVLink bridge,
> allowing up to 6 registers to be passed. Due to a host firmware bug
> (only recently fixed), only 1 ATSD register per NPU was actually
> advertised to the host system, so this passes that lone register via
> the first NVLink bridge device in the group; this is still enough as
> QEMU collects them all back and presents them to the guest via a vPHB
> to mimic the emulated NPU PHB on the host.
>
> In order to provide userspace with information about GPU-to-NVLink
> connections, this exports an additional capability called "tgt" (an
> abbreviated host system bus address). The "tgt" property tells the GPU
> its own system address and allows the guest driver to assemble the
> routing information so each GPU knows how to reach the other GPUs
> directly.
>
> For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
> know the LPID (a logical partition ID, in other words a KVM guest
> hardware ID) and the PID (a memory context ID of a userspace process,
> not to be confused with a Linux pid). This assigns a GPU to an LPID in
> the NPU, which is why this adds a KVM listener on the IOMMU group. The
> PID comes via NVLink from the GPU and the NPU uses a PID wildcard to
> pass it through.
>
> This requires coherent memory and ATSD to be available on the host as
> the GPU vendor only supports configurations with both features enabled;
> other configurations are known not to work. Because of this, and
> because of the way the features are advertised to the host system (a
> device tree with very platform specific properties), this requires the
> POWERNV platform to be enabled.
>
> The V100 GPUs do not advertise any of these capabilities via the config
> space, and there is more than one device ID, so this relies on the
> platform to tell whether these GPUs have special abilities such as
> NVLinks.
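
Just to spell out the userspace contract this creates (and to check my
own understanding): with this patch applied, a user would find the GPU
RAM region and its "tgt" by walking the standard region info capability
chain, roughly like the sketch below. This is a hypothetical helper
rather than code from the patch; error handling is trimmed, the NVIDIA
vendor ID is open-coded as 0x10de, and num_regions is assumed to come
from a prior VFIO_DEVICE_GET_INFO.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Hypothetical helper, not part of the patch: scan the device specific
 * regions of an open VFIO device fd for the NVLink2 GPU RAM region and
 * print its size and "tgt".  num_regions comes from VFIO_DEVICE_GET_INFO.
 */
static int show_nvlink2_ram(int device, unsigned int num_regions)
{
        unsigned int i;

        for (i = VFIO_PCI_NUM_REGIONS; i < num_regions; i++) {
                struct vfio_region_info *info;
                struct vfio_info_cap_header *hdr;
                struct vfio_region_info_cap_type *type = NULL;
                struct vfio_region_info_cap_nvidia_nvlink2 *nvlink2 = NULL;

                info = calloc(1, sizeof(*info));
                if (!info)
                        return -ENOMEM;
                info->argsz = sizeof(*info);
                info->index = i;

                /* First call only reports how big the caps make the struct */
                if (ioctl(device, VFIO_DEVICE_GET_REGION_INFO, info))
                        goto next;

                if (info->argsz > sizeof(*info)) {
                        __u32 argsz = info->argsz;
                        void *bigger = realloc(info, argsz);

                        if (!bigger)
                                goto next;
                        info = bigger;
                        memset(info, 0, argsz);
                        info->argsz = argsz;
                        info->index = i;
                        if (ioctl(device, VFIO_DEVICE_GET_REGION_INFO, info))
                                goto next;
                }

                if (!(info->flags & VFIO_REGION_INFO_FLAG_CAPS))
                        goto next;

                /* Walk the capability chain; offsets are from the start of info */
                hdr = (void *)((char *)info + info->cap_offset);
                while (1) {
                        if (hdr->id == VFIO_REGION_INFO_CAP_TYPE)
                                type = (void *)hdr;
                        else if (hdr->id == VFIO_REGION_INFO_CAP_NVIDIA_NVLINK2)
                                nvlink2 = (void *)hdr;
                        if (!hdr->next)
                                break;
                        hdr = (void *)((char *)info + hdr->next);
                }

                if (type && nvlink2 &&
                    type->type == (VFIO_REGION_TYPE_PCI_VENDOR_TYPE | 0x10de) &&
                    type->subtype == VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM)
                        printf("region %u: GPU RAM size=0x%llx tgt=0x%llx\n", i,
                               (unsigned long long)info->size,
                               (unsigned long long)nvlink2->tgt);
next:
                free(info);
        }

        return 0;
}

The ATSD region would be found the same way, matching the 1014 vendor
type and the NPU2 capability instead.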
>
> Signed-off-by: Alexey Kardashevskiy <aik@xxxxxxxxx>
> ---
> Changes:
> v5:
> * do not memremap GPU RAM for emulation, map it only when it is needed
> * allocate 1 ATSD register per NVLink bridge, if none left, then expose
>   the region with a zero size
> * separate caps per device type
> * addressed AW review comments
>
> v4:
> * added nvlink-speed to the NPU bridge capability as this turned out to
>   be not a constant value
> * instead of looking at the exact device ID (which also changes from system
>   to system), now this (indirectly) looks at the device tree to know
>   if GPU and NPU support NVLink
>
> v3:
> * reworded the commit log about tgt
> * added tracepoints (do we want them enabled for entire vfio-pci?)
> * added code comments
> * added write|mmap flags to the new regions
> * auto enabled VFIO_PCI_NVLINK2 config option
> * added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
>   references; these are required by the NVIDIA driver
> * keep notifier registered only for short time
> ---
>  drivers/vfio/pci/Makefile           |   1 +
>  drivers/vfio/pci/trace.h            | 102 ++++++
>  drivers/vfio/pci/vfio_pci_private.h |  14 +
>  include/uapi/linux/vfio.h           |  39 +++
>  drivers/vfio/pci/vfio_pci.c         |  27 +-
>  drivers/vfio/pci/vfio_pci_nvlink2.c | 473 ++++++++++++++++++++++++++++
>  drivers/vfio/pci/Kconfig            |   6 +
>  7 files changed, 660 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/vfio/pci/trace.h
>  create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c
>
> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
> index 76d8ec0..9662c06 100644
> --- a/drivers/vfio/pci/Makefile
> +++ b/drivers/vfio/pci/Makefile
> @@ -1,5 +1,6 @@
>
>  vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
>  vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
> +vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
>
>  obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
> diff --git a/drivers/vfio/pci/trace.h b/drivers/vfio/pci/trace.h
> new file mode 100644
> index 0000000..b80d2d3
> --- /dev/null
> +++ b/drivers/vfio/pci/trace.h
> @@ -0,0 +1,102 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * VFIO PCI mmap/mmap_fault tracepoints
> + *
> + * Copyright (C) 2018 IBM Corp. All rights reserved.
> + * Author: Alexey Kardashevskiy <aik@xxxxxxxxx>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM vfio_pci
> +
> +#if !defined(_TRACE_VFIO_PCI_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_VFIO_PCI_H
> +
> +#include <linux/tracepoint.h>
> +
> +TRACE_EVENT(vfio_pci_nvgpu_mmap_fault,
> +        TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
> +                        vm_fault_t ret),
> +        TP_ARGS(pdev, hpa, ua, ret),
> +
> +        TP_STRUCT__entry(
> +                __field(const char *, name)
> +                __field(unsigned long, hpa)
> +                __field(unsigned long, ua)
> +                __field(int, ret)
> +        ),
> +
> +        TP_fast_assign(
> +                __entry->name = dev_name(&pdev->dev),
> +                __entry->hpa = hpa;
> +                __entry->ua = ua;
> +                __entry->ret = ret;
> +        ),
> +
> +        TP_printk("%s: %lx -> %lx ret=%d", __entry->name, __entry->hpa,
> +                        __entry->ua, __entry->ret)
> +);
> +
> +TRACE_EVENT(vfio_pci_nvgpu_mmap,
> +        TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
> +                        unsigned long size, int ret),
> +        TP_ARGS(pdev, hpa, ua, size, ret),
> +
> +        TP_STRUCT__entry(
> +                __field(const char *, name)
> +                __field(unsigned long, hpa)
> +                __field(unsigned long, ua)
> +                __field(unsigned long, size)
> +                __field(int, ret)
> +        ),
> +
> +        TP_fast_assign(
> +                __entry->name = dev_name(&pdev->dev),
> +                __entry->hpa = hpa;
> +                __entry->ua = ua;
> +                __entry->size = size;
> +                __entry->ret = ret;
> +        ),
> +
> +        TP_printk("%s: %lx -> %lx size=%lx ret=%d", __entry->name, __entry->hpa,
> +                        __entry->ua, __entry->size, __entry->ret)
> +);
> +
> +TRACE_EVENT(vfio_pci_npu2_mmap,
> +        TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
> +                        unsigned long size, int ret),
> +        TP_ARGS(pdev, hpa, ua, size, ret),
> +
> +        TP_STRUCT__entry(
> +                __field(const char *, name)
> +                __field(unsigned long, hpa)
> +                __field(unsigned long, ua)
> +                __field(unsigned long, size)
> +                __field(int, ret)
> +        ),
> +
> +        TP_fast_assign(
> +                __entry->name = dev_name(&pdev->dev),
> +                __entry->hpa = hpa;
> +                __entry->ua = ua;
> +                __entry->size = size;
> +                __entry->ret = ret;
> +        ),
> +
> +        TP_printk("%s: %lx -> %lx size=%lx ret=%d", __entry->name, __entry->hpa,
> +                        __entry->ua, __entry->size, __entry->ret)
> +);
> +
> +#endif /* _TRACE_SUBSYS_H */
> +
> +#undef TRACE_INCLUDE_PATH
> +#define TRACE_INCLUDE_PATH .
> +#undef TRACE_INCLUDE_FILE
> +#define TRACE_INCLUDE_FILE trace
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>
> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> index 93c1738..127071b 100644
> --- a/drivers/vfio/pci/vfio_pci_private.h
> +++ b/drivers/vfio/pci/vfio_pci_private.h
> @@ -163,4 +163,18 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
>          return -ENODEV;
>  }
>  #endif
> +#ifdef CONFIG_VFIO_PCI_NVLINK2
> +extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
> +extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
> +#else
> +static inline int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
> +{
> +        return -ENODEV;
> +}
> +
> +static inline int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
> +{
> +        return -ENODEV;
> +}
> +#endif
>  #endif /* VFIO_PCI_PRIVATE_H */
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 8131028..ce28d39 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -353,6 +353,21 @@ struct vfio_region_gfx_edid {
>  #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
>  };
>
> +/*
> + * 10de vendor sub-type
> + *
> + * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
> + */
> +#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM  (1)
> +
> +/*
> + * 1014 vendor sub-type
> + *
> + * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
> + * to do TLB invalidation on a GPU.
> + */
> +#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD  (1)
> +
>  /*
>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
>   * which allows direct access to non-MSIX registers which happened to be within
> @@ -363,6 +378,30 @@ struct vfio_region_gfx_edid {
>   */
>  #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE  3
>
> +/*
> + * Capability with compressed real address (aka SSA - small system address)
> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
> + */
> +#define VFIO_REGION_INFO_CAP_NVIDIA_NVLINK2  4
> +
> +struct vfio_region_info_cap_nvidia_nvlink2 {
> +        struct vfio_info_cap_header header;
> +        __u64 tgt;
> +};
> +
> +/*
> + * Capability with compressed real address (aka SSA - small system address),
> + * used to match the NVLink bridge with a GPU. Also contains a link speed.
> + */
> +#define VFIO_REGION_INFO_CAP_IBM_NPU2  5
> +
> +struct vfio_region_info_cap_ibm_npu2 {
> +        struct vfio_info_cap_header header;
> +        __u64 tgt;
> +        __u32 link_speed;
> +        __u32 __pad;
> +};

Another option here would be to have one capability that exposes tgt,
used by both devices, and another capability that exposes link_speed,
used only by the NPU.  Perhaps VFIO_REGION_INFO_CAP_NVLINK2_SSATGT and
VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD.  We don't necessarily need to make
each capability specific to a device and we can stack multiple together.
Is there some association between tgt and link_speed that requires
exposing them together?  We could also simply expose the link_speed as
a __u64 rather than introduce padding.

Thanks,

Alex
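
P.S. Purely for illustration, the split I'm suggesting could look
something like the below in the uapi header. The names and layout are
just my suggestion spelled out, not anything this patch defines, and
link_speed could equally be a single __u64 with no pad.

/* Sketch only: stacked capabilities instead of per-device ones */
#define VFIO_REGION_INFO_CAP_NVLINK2_SSATGT  4

/* Present on both the GPU RAM and NPU ATSD regions: system bus address */
struct vfio_region_info_cap_nvlink2_ssatgt {
        struct vfio_info_cap_header header;
        __u64 tgt;
};

#define VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD  5

/* Present only on the NPU ATSD region: the link speed */
struct vfio_region_info_cap_nvlink2_lnkspd {
        struct vfio_info_cap_header header;
        __u32 link_speed;
        __u32 __pad;    /* or fold both fields into a __u64 link_speed */
};

The GPU region would then carry only the SSATGT capability, the NPU
region both, and userspace simply walks the chain for whichever IDs it
finds.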