Re: [RFC RESEND 00/16] Split IOMMU DMA mapping operation to two steps

On 2024/3/7 7:01, Zhu Yanjun wrote:
On 2024/3/5 12:18, Leon Romanovsky wrote:
This is a complementary part to the proposed LSF/MM topic.
https://lore.kernel.org/linux-rdma/22df55f8-cf64-4aa8-8c0b-b556c867b926@xxxxxxxxx/T/#m85672c860539fdbbc8fe0f5ccabdc05b40269057

I am interested in this topic and hope I can join the meeting to discuss it.


Following the same idea, the dma_alloc_coherent() call in the IDPF driver can be divided into the following two functions:

iommu_dma_alloc_pages

and

iommu_dma_map_page

So iommu_dma_alloc_pages() allocates the pages, and iommu_dma_map_page() creates the mapping between those pages and the IOVA. A rough sketch of that split from the caller's side is below; the prototypes of iommu_dma_alloc_pages() and iommu_dma_map_page() here are simplified assumptions for illustration, not the exact in-tree signatures:
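
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/*
 * Hypothetical two-step replacement for a dma_alloc_coherent() call.
 * The helper prototypes are simplified for illustration; the real
 * functions in drivers/iommu/dma-iommu.c take different arguments.
 */
static void *two_step_alloc(struct device *dev, size_t size,
                            dma_addr_t *iova)
{
        struct page *pages;

        /* Step 1: allocate the backing pages; no IOVA involved yet. */
        pages = iommu_dma_alloc_pages(dev, size, GFP_KERNEL);
        if (!pages)
                return NULL;

        /* Step 2: map the pages into the device's IOVA space. */
        *iova = iommu_dma_map_page(dev, pages, 0, size,
                                   DMA_BIDIRECTIONAL);
        if (*iova == DMA_MAPPING_ERROR) {
                __free_pages(pages, get_order(size));
                return NULL;
        }

        return page_address(pages);
}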

The above idea is now implemented in the NIC driver, and it currently works well.

Next, the same idea will be applied to the block device, hopefully improving its performance.

Best Regards,
Zhu Yanjun



This is posted as an RFC to get feedback on the proposed split. The RDMA, VFIO and DMA patches are ready for review and inclusion; the NVMe patches are still in
progress as they require agreement on the API first.

Thanks

-------------------------------------------------------------------------------
The DMA mapping operation performs two steps at the same time: it allocates
IOVA space and actually maps DMA pages to that space. This one-shot
operation works perfectly for simple scenarios, where callers use the
DMA API in the control path while setting up hardware.

However, in more complex scenarios, where DMA mapping is needed in the
data path, and especially when some specific datatype is involved,
such a one-shot approach has its drawbacks.

That approach pushes developers to introduce new DMA APIs for each
specific datatype: for example, the existing scatter-gather mapping
functions, Chuck's recent RFC series to add biovec-related DMA mapping
[1], and probably struct folio will need one too.

These advanced DMA mapping APIs are needed to calculate the total IOVA
size, so it can be allocated as one chunk, and to perform the offset
calculations that determine which part of the IOVA each page maps to.
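
For instance, the scatter-gather API already hides exactly that bookkeeping. A sketch (standard calls, hypothetical wrapper name) of how dma_map_sg() sizes one IOVA chunk for the whole list and derives each segment's offset internally:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical wrapper, for illustration only. */
static int map_pages_via_sg(struct device *dev, struct page **pages,
                            unsigned int npages, struct scatterlist *sgl)
{
        unsigned int i;

        sg_init_table(sgl, npages);
        for (i = 0; i < npages; i++)
                sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

        /* dma_map_sg() computes the total IOVA size and per-segment
         * offsets internally; a biovec or folio API would have to
         * duplicate this logic for its own datatype. */
        return dma_map_sg(dev, sgl, npages, DMA_TO_DEVICE);
}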

Instead of teaching the DMA layer about these specific datatypes, let's
separate the existing DMA mapping routine into two steps, giving
advanced callers (subsystems) the option to perform all calculations
internally in advance and map pages later, when needed.
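
A rough sketch of what a caller's flow could look like after the split; dma_link_range() is taken from the patch titles below, while dma_alloc_iova() and both prototypes are assumptions for illustration only:

/* Hypothetical two-step caller flow; prototypes are illustrative. */
static int two_step_example(struct device *dev, struct page **pages,
                            unsigned int npages)
{
        dma_addr_t iova, addr;
        unsigned int i;

        /* Step 1: the caller sizes the whole range itself and
         * allocates the IOVA as one chunk, in advance. */
        iova = dma_alloc_iova(dev, npages * PAGE_SIZE); /* assumed helper */
        if (iova == DMA_MAPPING_ERROR)
                return -ENOMEM;

        /* Step 2: pages are linked into that range later, e.g. from
         * the data path, at offsets the caller already knows. */
        for (i = 0; i < npages; i++) {
                addr = dma_link_range(dev, iova + i * PAGE_SIZE,
                                      page_to_phys(pages[i]), PAGE_SIZE);
                if (dma_mapping_error(dev, addr))
                        return -EIO;
        }
        return 0;
}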

In this series, three users are converted, and each conversion
demonstrates a different gain:
1. RDMA simplifies and speeds up its pagefault handling for
    on-demand-paging (ODP) mode.
2. VFIO PCI live migration code saves a huge chunk of memory.
3. NVMe PCI avoids intermediate SG table manipulation and operates
    directly on BIOs.

Thanks

[1] https://lore.kernel.org/all/169772852492.5232.17148564580779995849.stgit@xxxxxxxxxxxxxxxxxxxxx

Chaitanya Kulkarni (2):
   block: add dma_link_range() based API
   nvme-pci: use blk_rq_dma_map() for NVMe SGL

Leon Romanovsky (14):
   mm/hmm: let users to tag specific PFNs
   dma-mapping: provide an interface to allocate IOVA
   dma-mapping: provide callbacks to link/unlink pages to specific IOVA
   iommu/dma: Provide an interface to allow preallocate IOVA
   iommu/dma: Prepare map/unmap page functions to receive IOVA
   iommu/dma: Implement link/unlink page callbacks
   RDMA/umem: Preallocate and cache IOVA for UMEM ODP
   RDMA/umem: Store ODP access mask information in PFN
   RDMA/core: Separate DMA mapping to caching IOVA and page linkage
   RDMA/umem: Prevent UMEM ODP creation with SWIOTLB
   vfio/mlx5: Explicitly use number of pages instead of allocated length
   vfio/mlx5: Rewrite create mkey flow to allow better code reuse
   vfio/mlx5: Explicitly store page list
   vfio/mlx5: Convert vfio to use DMA link API

  Documentation/core-api/dma-attributes.rst |   7 +
  block/blk-merge.c                         | 156 ++++++++++++++
  drivers/infiniband/core/umem_odp.c        | 219 +++++++------------
  drivers/infiniband/hw/mlx5/mlx5_ib.h      |   1 +
  drivers/infiniband/hw/mlx5/odp.c          |  59 +++--
  drivers/iommu/dma-iommu.c                 | 129 ++++++++---
  drivers/nvme/host/pci.c                   | 220 +++++--------------
  drivers/vfio/pci/mlx5/cmd.c               | 252 ++++++++++++----------
  drivers/vfio/pci/mlx5/cmd.h               |  22 +-
  drivers/vfio/pci/mlx5/main.c              | 136 +++++-------
  include/linux/blk-mq.h                    |   9 +
  include/linux/dma-map-ops.h               |  13 ++
  include/linux/dma-mapping.h               |  39 ++++
  include/linux/hmm.h                       |   3 +
  include/rdma/ib_umem_odp.h                |  22 +-
  include/rdma/ib_verbs.h                   |  54 +++++
  kernel/dma/debug.h                        |   2 +
  kernel/dma/direct.h                       |   7 +-
  kernel/dma/mapping.c                      |  91 ++++++++
  mm/hmm.c                                  |  34 +--
  20 files changed, 870 insertions(+), 605 deletions(-)
