> From: Jason Wang [mailto:jasowang@xxxxxxxxxx]
> Sent: Friday, October 25, 2019 5:49 PM
>
> On 2019/10/24 8:34 PM, Liu Yi L wrote:
> > Shared virtual address (SVA), a.k.a. Shared virtual memory (SVM), on Intel
> > platforms allows address space sharing between device DMA and applications.
>
> Interesting, so the figure below demonstrates the VM case. I wonder how
> much difference there is compared with doing SVM between a device and an
> ordinary process (e.g. dpdk)?
>
> Thanks

One difference is that an ordinary process requires only stage-1 translation,
while a VM requires nested translation: the guest-managed first-level tables
translate GVA to GPA, and the host-managed second-level tables translate GPA
to HPA. (A toy model contrasting the two cases, plus a rough sketch of the
IOMMUContext plumbing, is appended at the end of this mail.)

> >
> > SVA can reduce programming complexity and enhance security.
> > This series is intended to expose SVA capability to VMs, i.e. sharing the
> > guest application address space with passthrough devices. The whole SVA
> > virtualization requires QEMU/VFIO/IOMMU changes. This series includes the
> > QEMU changes; the VFIO and IOMMU changes are in separate series (listed
> > under "Related series").
> >
> > The high-level architecture for SVA virtualization is as below:
> >
> >     .-------------.  .---------------------------.
> >     |   vIOMMU    |  | Guest process CR3, FL only|
> >     |             |  '---------------------------'
> >     .----------------/
> >     | PASID Entry |--- PASID cache flush -
> >     '-------------'                       |
> >     |             |                       V
> >     |             |                CR3 in GPA
> >     '-------------'
> > Guest
> > ------| Shadow |--------------------------|--------
> >       v        v                          v
> > Host
> >     .-------------.  .----------------------.
> >     |   pIOMMU    |  | Bind FL for GVA-GPA  |
> >     |             |  '----------------------'
> >     .----------------/  |
> >     | PASID Entry |     V (Nested xlate)
> >     '----------------\.------------------------------.
> >     |             |   |SL for GPA-HPA, default domain|
> >     |             |   '------------------------------'
> >     '-------------'
> > Where:
> >  - FL = First level/stage one page tables
> >  - SL = Second level/stage two page tables
> >
> > The complete vSVA upstream patches are divided into three phases:
> >     1. Common APIs and PCI device direct assignment
> >     2. Page Request Services (PRS) support
> >     3. Mediated device assignment
> >
> > This RFC patchset aims at phase 1 and works together with the VT-d driver
> > changes [1] and the VFIO changes [2].
> >
> > Related series:
> > [1] [PATCH v6 00/10] Nested Shared Virtual Address (SVA) VT-d support:
> >     https://lkml.org/lkml/2019/10/22/953
> >     <This series is based on this kernel series from Jacob Pan>
> >
> > [2] [RFC v2 0/3] vfio: support Shared Virtual Addressing from Yi Liu
> >
> > There are roughly four parts:
> >  1. Introduce IOMMUContext as an abstraction layer between the vIOMMU
> >     emulator and VFIO to avoid direct calls between the two
> >  2. Pass down PASID allocation and free to the host
> >  3. Pass down guest PASID binding to the host
> >  4. Pass down guest IOMMU cache invalidation to the host
> >
> > The full set can be found at:
> > https://github.com/luxis1999/qemu.git: sva_vtd_v6_qemu_rfc_v2
> >
> > Changelog:
> > 	- RFC v1 -> v2:
> > 	  Introduce IOMMUContext to abstract the connection between VFIO
> > 	  and the vIOMMU emulator, as a replacement for the PCIPASIDOps
> > 	  of RFC v1. Modify x-scalable-mode to be a string option instead of
> > 	  adding a new option as RFC v1 did. Refine the pasid cache management
> > 	  and address the TODOs mentioned in RFC v1.
> > 	  RFC v1: https://patchwork.kernel.org/cover/11033657/
> >
> > Eric Auger (1):
> >   update-linux-headers: Import iommu.h
> >
> > Liu Yi L (20):
> >   header update VFIO/IOMMU vSVA APIs against 5.4.0-rc3+
> >   intel_iommu: modify x-scalable-mode to be string option
> >   vfio/common: add iommu_ctx_notifier in container
> >   hw/pci: modify pci_setup_iommu() to set PCIIOMMUOps
> >   hw/pci: introduce pci_device_iommu_context()
> >   intel_iommu: provide get_iommu_context() callback
> >   vfio/pci: add iommu_context notifier for pasid alloc/free
> >   intel_iommu: add virtual command capability support
> >   intel_iommu: process pasid cache invalidation
> >   intel_iommu: add present bit check for pasid table entries
> >   intel_iommu: add PASID cache management infrastructure
> >   vfio/pci: add iommu_context notifier for pasid bind/unbind
> >   intel_iommu: bind/unbind guest page table to host
> >   intel_iommu: replay guest pasid bindings to host
> >   intel_iommu: replay pasid binds after context cache invalidation
> >   intel_iommu: do not passdown pasid bind for PASID #0
> >   vfio/pci: add iommu_context notifier for PASID-based iotlb flush
> >   intel_iommu: process PASID-based iotlb invalidation
> >   intel_iommu: propagate PASID-based iotlb invalidation to host
> >   intel_iommu: process PASID-based Device-TLB invalidation
> >
> > Peter Xu (1):
> >   hw/iommu: introduce IOMMUContext
> >
> >  hw/Makefile.objs                |    1 +
> >  hw/alpha/typhoon.c              |    6 +-
> >  hw/arm/smmu-common.c            |    6 +-
> >  hw/hppa/dino.c                  |    6 +-
> >  hw/i386/amd_iommu.c             |    6 +-
> >  hw/i386/intel_iommu.c           | 1249 +++++++++++++++++++++++++++++++++++++--
> >  hw/i386/intel_iommu_internal.h  |  109 ++++
> >  hw/i386/trace-events            |    6 +
> >  hw/iommu/Makefile.objs          |    1 +
> >  hw/iommu/iommu.c                |   66 +++
> >  hw/pci-host/designware.c        |    6 +-
> >  hw/pci-host/ppce500.c           |    6 +-
> >  hw/pci-host/prep.c              |    6 +-
> >  hw/pci-host/sabre.c             |    6 +-
> >  hw/pci/pci.c                    |   27 +-
> >  hw/ppc/ppc440_pcix.c            |    6 +-
> >  hw/ppc/spapr_pci.c              |    6 +-
> >  hw/s390x/s390-pci-bus.c         |    8 +-
> >  hw/vfio/common.c                |   10 +
> >  hw/vfio/pci.c                   |  149 +++++
> >  include/hw/i386/intel_iommu.h   |   58 +-
> >  include/hw/iommu/iommu.h        |  113 ++++
> >  include/hw/pci/pci.h            |   13 +-
> >  include/hw/pci/pci_bus.h        |    2 +-
> >  include/hw/vfio/vfio-common.h   |    9 +
> >  linux-headers/linux/iommu.h     |  324 ++++
> >  linux-headers/linux/vfio.h      |   83 +++
> >  scripts/update-linux-headers.sh |    2 +-
> >  28 files changed, 2232 insertions(+), 58 deletions(-)
> >  create mode 100644 hw/iommu/Makefile.objs
> >  create mode 100644 hw/iommu/iommu.c
> >  create mode 100644 include/hw/iommu/iommu.h
> >  create mode 100644 linux-headers/linux/iommu.h
> >
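
Regarding the stage-1 vs. nested point above, here is a toy model for readers
who are new to nested translation. This is not QEMU or kernel code and does
not reflect the real VT-d page-table formats (those are in the VT-d spec and
in the linux-headers/linux/iommu.h imported by this series); it only shows,
with flat single-level lookup tables, how the two stages compose in the VM
case:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NPAGES      4
#define PAGE_SHIFT  12
#define PAGE_MASK   ((1ULL << PAGE_SHIFT) - 1)

/*
 * Stage-1 (FL): owned by the (guest) process.  For an ordinary host
 * process its output is already a host PA; for a guest process it is a
 * GPA ("Guest process CR3, FL only" in the figure).
 */
static const uint64_t stage1_map[NPAGES] = { 2, 0, 3, 1 };

/*
 * Stage-2 (SL): owned by the host, GPA page -> HPA page ("SL for
 * GPA-HPA, default domain" in the figure).  Only the VM case uses it.
 */
static const uint64_t stage2_map[NPAGES] = { 7, 5, 6, 4 };

/* Ordinary process (e.g. dpdk with SVA): one stage, VA -> PA. */
static uint64_t xlate_stage1_only(uint64_t va)
{
    return (stage1_map[va >> PAGE_SHIFT] << PAGE_SHIFT) | (va & PAGE_MASK);
}

/* Guest process (vSVA): nested, GVA --FL--> GPA --SL--> HPA. */
static uint64_t xlate_nested(uint64_t gva)
{
    uint64_t gpa = (stage1_map[gva >> PAGE_SHIFT] << PAGE_SHIFT) | (gva & PAGE_MASK);
    return (stage2_map[gpa >> PAGE_SHIFT] << PAGE_SHIFT) | (gpa & PAGE_MASK);
}

int main(void)
{
    uint64_t va = (2ULL << PAGE_SHIFT) | 0x123;

    printf("stage-1 only: va  0x%" PRIx64 " -> pa  0x%" PRIx64 "\n",
           va, xlate_stage1_only(va));
    printf("nested:       gva 0x%" PRIx64 " -> hpa 0x%" PRIx64 "\n",
           va, xlate_nested(va));
    return 0;
}

The extra hop is exactly what the series has to set up: the guest FL pointer
(guest CR3, a GPA) is bound down to the host so the pIOMMU can walk it nested
on top of the existing GPA->HPA second-level mapping.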
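
And for readers trying to place the "four parts": part 1 puts an IOMMUContext
with notifiers between the vIOMMU emulator and VFIO, and parts 2-4 are the
events carried across it (PASID alloc/free, guest PASID bind/unbind,
PASID-based cache invalidation). The sketch below is only a guess at that
shape to make the idea concrete; the type names, fields and event list are
hypothetical, and the real definitions live in include/hw/iommu/iommu.h,
hw/iommu/iommu.c and hw/vfio/pci.c of this series:

/* Events the vIOMMU emulator raises and VFIO consumes (hypothetical
 * names; the patch titles list pasid alloc/free, pasid bind/unbind and
 * PASID-based iotlb flush).
 */
typedef enum {
    IOMMU_CTX_EVENT_PASID_ALLOC,      /* part 2: PASID allocation/free     */
    IOMMU_CTX_EVENT_PASID_BIND,       /* part 3: guest PASID table binding */
    IOMMU_CTX_EVENT_CACHE_INVALIDATE, /* part 4: cache invalidation        */
} IOMMUCtxEvent;

typedef struct IOMMUCtxEventData {
    IOMMUCtxEvent event;
    void *data;                       /* event-specific payload            */
} IOMMUCtxEventData;

typedef struct IOMMUCtxNotifier IOMMUCtxNotifier;
typedef void (*IOMMUCtxNotifyFn)(IOMMUCtxNotifier *n, IOMMUCtxEventData *ev);

struct IOMMUCtxNotifier {
    IOMMUCtxNotifyFn notify;          /* VFIO's handler                    */
    IOMMUCtxNotifier *next;           /* plain list for this sketch        */
};

/* Part 1: the vIOMMU never calls VFIO directly; it only fires the
 * notifiers registered on the device's IOMMUContext, and VFIO turns
 * them into ioctls to the host IOMMU driver.
 */
typedef struct IOMMUContext {
    IOMMUCtxNotifier *notifiers;
} IOMMUContext;

static inline void iommu_ctx_register(IOMMUContext *ctx, IOMMUCtxNotifier *n)
{
    n->next = ctx->notifiers;
    ctx->notifiers = n;
}

static inline void iommu_ctx_notify(IOMMUContext *ctx, IOMMUCtxEventData *ev)
{
    for (IOMMUCtxNotifier *n = ctx->notifiers; n; n = n->next) {
        n->notify(n, ev);
    }
}

Whether the real series uses this exact notifier pattern or something closer
to the existing MemoryRegion IOMMU notifiers is best checked against the
"hw/iommu: introduce IOMMUContext" patch itself.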