Re: [RFC PATCH 0/4] Use 1st-level for DMA remapping in guest

Hi Jacob,

On 9/24/19 3:27 AM, Jacob Pan wrote:
Hi Baolu,

On Mon, 23 Sep 2019 20:24:50 +0800
Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx> wrote:

This patchset aims to move IOVA (I/O Virtual Address) translation
to the first-level page table under scalable mode. The major purpose
of this effort is to make guest IOVA support more efficient.

Because the Intel VT-d architecture offers caching mode, guest IOVA
(GIOVA) support is currently implemented in a shadow page manner.
Caching mode requires the guest to invalidate the IOTLB even when it
creates a previously non-present mapping, so every map and unmap is
visible to the device simulation software, such as QEMU. The
simulation software figures out the GIOVA->GPA mapping and writes it
to a shadow page table, which is then used by the pIOMMU. Each time
mappings are created or destroyed in the vIOMMU, the simulation
software intervenes: the GIOVA->GPA changes are shadowed to the host,
and the pIOMMU is updated via the VFIO/IOMMU interfaces (a sketch of
this flow follows the diagram below).


      .-----------.
      |  vIOMMU   |
      |-----------|                 .--------------------.
      |           |IOTLB flush trap |        QEMU        |
      .-----------. (map/unmap)     |--------------------|
      | GIOVA->GPA|---------------->|      .----------.  |
      '-----------'                 |      | GPA->HPA |  |
      |           |                 |      '----------'  |
      '-----------'                 |                    |
                                    |                    |
                                    '--------------------'
                                                 |
             <------------------------------------
             |
             v VFIO/IOMMU API
       .-----------.
       |  pIOMMU   |
       |-----------|
       |           |
       .-----------.
       | GIOVA->HPA|
       '-----------'
       |           |
       '-----------'
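
For reference, below is a minimal userspace sketch of that shadow
update, assuming a legacy VFIO type1 container. shadow_map() and
shadow_unmap() are illustrative helpers (not QEMU functions), and the
GIOVA->GPA->HVA lookup that produces hva is left out:

/*
 * Minimal sketch of the shadow update through VFIO type1.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int shadow_map(int container_fd, uint64_t giova, void *hva,
		      uint64_t size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (uint64_t)(uintptr_t)hva, /* HVA backing the GPA */
		.iova  = giova,	/* reuse the guest IOVA as host IOVA */
		.size  = size,
	};

	/* The pIOMMU shadow page table now maps GIOVA -> HPA. */
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}

static int shadow_unmap(int container_fd, uint64_t giova, uint64_t size)
{
	struct vfio_iommu_type1_dma_unmap unmap = {
		.argsz = sizeof(unmap),
		.iova  = giova,
		.size  = size,
	};

	return ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap);
}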

In VT-d 3.0, scalable mode is introduced, which offers two levels of
translation page tables and a nested translation mode. With regard to
GIOVA support, it can be simplified by 1) moving GIOVA support onto
the first-level page table, storing the GIOVA->GPA mapping in the
vIOMMU, 2) binding the vIOMMU first-level page table to the pIOMMU,
3) using the pIOMMU second level for GPA->HPA translation, and
4) enabling nested (a.k.a. dual-stage) translation in the host.
Compared with the current shadow GIOVA support, the new approach is
more secure, and the software is simplified: we only need to flush
the pIOMMU IOTLB and possibly the device-IOTLB when an IOVA mapping
in the vIOMMU is torn down (see the sketch after the second diagram
below).

      .-----------.
      |  vIOMMU   |
      |-----------|                 .-----------.
      |           |IOTLB flush trap |   QEMU    |
      .-----------.    (unmap)      |-----------|
      | GIOVA->GPA|---------------->|           |
      '-----------'                 '-----------'
      |           |                       |
      '-----------'                       |
            <------------------------------
            |      VFIO/IOMMU
            |  cache invalidation and
             | guest PGD bind interfaces
            v
For vSVA, the guest PGD bind interface will mark the PASID as a guest
PASID and will inject page requests into the guest. In the first-level
GIOVA case, I guess we are assuming there is no page fault for GIOVA.
I will need to add a flag to the guest PGD bind such that any page
request (PRS) will be auto-responded with an invalid response.

There should be no page fault. The pages should have been pinned.
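
Roughly, the flag Jacob describes could gate the page-request path as
in the sketch below; the flag name, types, and handler are purely
hypothetical, since the bind interface is still under discussion:

#include <stdint.h>

/* Hypothetical bind flag: the PASID covers pinned GIOVA pages. */
#define GPGD_BIND_FLAG_NO_FAULT	(1u << 0)

enum prq_resp { PRQ_RESP_SUCCESS, PRQ_RESP_INVALID, PRQ_RESP_FAILURE };

struct bound_pasid {
	uint32_t pasid;
	uint32_t flags;
};

static enum prq_resp handle_page_request(const struct bound_pasid *p)
{
	/* GIOVA pages are pinned, so a page request can only be a bug
	 * or an attack; auto-respond with an invalid response instead
	 * of injecting the fault into the guest. */
	if (p->flags & GPGD_BIND_FLAG_NO_FAULT)
		return PRQ_RESP_INVALID;

	/* vSVA case: forward the page request to the guest (not shown). */
	return PRQ_RESP_SUCCESS;
}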


Also, is native use of first-level IOVA mapping not to be supported?
i.e., will the IOMMU API and DMA API for native usage continue to be
second-level (SL) only?

Yes. There is no such use case as far as I can see.

Best regards,
Baolu

      .-----------.
      |  pIOMMU   |
      |-----------|
      .-----------.
      | GIOVA->GPA|<---First level
      '-----------'
      | GPA->HPA  |<---Second level
      '-----------'
      '-----------'
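
As a rough illustration of how much the per-unmap work shrinks, here
is a conceptual sketch. bind_guest_pgd(), invalidate_iotlb() and
invalidate_dev_iotlb() are hypothetical stand-ins for the guest PGD
bind and cache-invalidation interfaces under discussion, not existing
uAPIs:

#include <stdint.h>

/* Hypothetical stand-ins; not merged kernel or VFIO interfaces. */
int  bind_guest_pgd(int device_fd, uint64_t guest_pgd_gpa);
void invalidate_iotlb(int device_fd, uint64_t giova, uint64_t size);
void invalidate_dev_iotlb(int device_fd, uint64_t giova, uint64_t size);

/* One-time setup: hand the guest first-level page table (GIOVA->GPA)
 * to the pIOMMU, which nests it over the host-owned GPA->HPA level. */
static int setup_nested(int device_fd, uint64_t guest_pgd_gpa)
{
	return bind_guest_pgd(device_fd, guest_pgd_gpa);
}

/* Per trapped unmap: no shadow page table to rewrite, only cache
 * flushes for the torn-down GIOVA range. */
static void viommu_unmap_trap(int device_fd, uint64_t giova, uint64_t size)
{
	invalidate_iotlb(device_fd, giova, size);	/* pIOMMU IOTLB */
	invalidate_dev_iotlb(device_fd, giova, size);	/* device-IOTLB, if ATS */
}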

This patch series only aims to achieve the first goal, i.e. using
first-level translation for IOVA mappings in the vIOMMU. I am sending
it out for your comments. Any comments, suggestions and concerns are
welcome.



Based-on-idea-by: Ashok Raj <ashok.raj@xxxxxxxxx>
Based-on-idea-by: Kevin Tian <kevin.tian@xxxxxxxxx>
Based-on-idea-by: Liu Yi L <yi.l.liu@xxxxxxxxx>
Based-on-idea-by: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
Based-on-idea-by: Sanjay Kumar <sanjay.k.kumar@xxxxxxxxx>

Lu Baolu (4):
   iommu/vt-d: Move domain_flush_cache helper into header
   iommu/vt-d: Add first level page table interfaces
   iommu/vt-d: Map/unmap domain with mmmap/mmunmap
   iommu/vt-d: Identify domains using first level page table

  drivers/iommu/Makefile             |   2 +-
  drivers/iommu/intel-iommu.c        | 142 ++++++++++--
  drivers/iommu/intel-pgtable.c      | 342 +++++++++++++++++++++++++++++
  include/linux/intel-iommu.h        |  31 ++-
  include/trace/events/intel_iommu.h |  60 +++++
  5 files changed, 553 insertions(+), 24 deletions(-)
  create mode 100644 drivers/iommu/intel-pgtable.c

