Re: [PATCH v3 rdma-next 0/6] Introduce a DMA block iterator

On Mon, May 06, 2019 at 08:53:31AM -0500, Shiraz Saleem wrote:
> From: "Saleem, Shiraz" <shiraz.saleem@xxxxxxxxx>
> 
> This patch set aims to allow drivers to leverage a new DMA
> block iterator to get contiguous aligned memory blocks within
> their HW supported page sizes. The motivation for this work comes
> from the discussion in [1].
> 
> The first patch introduces a new umem API that allows drivers to find a
> best supported page size to use for the MR, from a bitmap of HW supported
> page sizes.
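
The selection described here can be modeled in userspace roughly as below. This is a simplified sketch of the idea only: best_pgsz(), its flat-array arguments, and the constants are illustrative, while the real helper (ib_umem_find_best_pgsz()) works on the umem scatterlist in-kernel. Any bit position where a DMA address differs from its VA, or where an interior chunk length is unaligned, rules out all page sizes above that bit.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative userspace model of best-page-size selection; not the
 * kernel helper.  'dma' holds chunk start addresses, 'len' their sizes,
 * 'va' the virtual address of the first byte, 'pgsz_bitmap' the HW
 * supported page sizes (assumed non-zero). */
static uint64_t best_pgsz(uint64_t pgsz_bitmap, uint64_t va,
			  const uint64_t *dma, const uint64_t *len, int n)
{
	uint64_t mask = 0;
	int i;

	for (i = 0; i < n; i++) {
		mask |= dma[i] ^ va;		/* VA/PA low-bit mismatch */
		if (i != n - 1)
			mask |= len[i];		/* interior chunks must end page aligned */
		va += len[i];
	}
	if (!mask)				/* perfectly aligned: take the largest size */
		return 1ULL << (63 - __builtin_clzll(pgsz_bitmap));

	uint64_t align = mask & -mask;			 /* lowest misaligned bit */
	uint64_t fit = pgsz_bitmap & ((align << 1) - 1); /* sizes <= that alignment */
	return fit ? 1ULL << (63 - __builtin_clzll(fit)) : 0;
}
```

A zero return models the "no supported page size fits" case, where MR creation would fail.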
> 
> The second patch introduces a new DMA block iterator that allows
> drivers to get aligned DMA addresses within a supplied best page size.
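
What the iterator yields can be modeled in userspace roughly as below. The in-kernel interface is the rdma_for_each_block() / rdma_block_iter_dma_address() pair operating on the scatterlist; dma_blocks() and its flat-array arguments here are illustrative only.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model: emit one aligned block start per 'pgsz'-sized
 * piece of each (addr, len) DMA segment; returns the block count.
 * 'pgsz' must be a power of two, as selected from the HW bitmap. */
static int dma_blocks(const uint64_t *addr, const uint64_t *len, int n,
		      uint64_t pgsz, uint64_t *out, int max)
{
	int i, cnt = 0;

	for (i = 0; i < n; i++) {
		uint64_t a = addr[i] & ~(pgsz - 1);	/* align first block down */
		uint64_t end = addr[i] + len[i];

		for (; a < end && cnt < max; a += pgsz)
			out[cnt++] = a;			/* one aligned block start */
	}
	return cnt;
}
```

A driver would feed each returned block address into its HW page table, e.g. one entry per 2M block instead of per 4K page.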
> 
> The third and fourth patches remove the dependency of the i40iw and bnxt_re
> drivers on the hugetlb flag. The new core APIs are called in these drivers to
> get huge page size aligned addresses if the MR is backed by huge pages.
> 
> The fifth patch removes the hugetlb flag from IB core.
> 
> The sixth patch extends the DMA block iterator for HW that can support mixed
> block sizes. This patch is untested.
> 
> [1] https://patchwork.kernel.org/patch/10499753/
> 
> RFC-->v0:
> ---------
> * Add to scatter table by iterating a limited sized page list.
> * Updated driver call sites to use the for_each_sg_page iterator
>   variant where applicable.
> * Tweaked algorithm in ib_umem_find_single_pg_size and ib_umem_next_phys_iter
>   to ignore alignment of the start of first SGE and end of the last SGE.
> * Simplified the offset alignment checks in ib_umem_find_single_pg_size for
>   user-space virtual and physical buffers.
> * Updated ib_umem_start_phys_iter to do some pre-computation
>   for the non-mixed page support case.
> * Updated bnxt_re driver to use the new core APIs and remove its
>   dependency on the huge tlb flag.
> * Fixed a bug in computation of sg_phys_iter->phyaddr in ib_umem_next_phys_iter.
> * Drop hugetlb flag usage from RDMA subsystem.
> * Rebased on top of for-next.
> 
> v0-->v1:
> --------
> * Remove the patches that update drivers to use the for_each_sg_page variant
>   to iterate the SGEs. This is sent as a separate series using
>   the for_each_sg_dma_page variant.
> * Tweak ib_umem_add_sg_table API definition based on maintainer feedback.
> * Cache number of scatterlist entries in umem.
> * Update function headers for ib_umem_find_single_pg_size and ib_umem_next_phys_iter.
> * Add sanity check on supported_pgsz in ib_umem_find_single_pg_size.
> 
> v1-->v2:
> --------
> * Removed page combining patch as it was sent stand-alone.
> * __fls on pgsz_bitmap as opposed to fls64 since it's an unsigned long.
> * Renamed ib_umem_find_pg_bit() --> rdma_find_pg_bit() and moved it to ib_verbs.h.
> * Renamed ib_umem_find_single_pg_size() --> ib_umem_find_best_pgsz().
> * New flag IB_UMEM_VA_BASED_OFFSET for the ib_umem_find_best_pgsz API, for HW that uses
>   least significant bits of the VA to indicate the start offset into the DMA list.
> * rdma_find_pg_bit() logic re-written and simplified. It can support input of 0 or 1 DMA addresses.
> * ib_umem_find_best_pgsz() optimized to be less computationally expensive, running rdma_find_pg_bit() only once.
> * rdma_for_each_block() is the new re-designed DMA block iterator, more in line with the for_each_sg_dma_page() iterator.
> * rdma_find_mixed_pg_bit() logic for interior SGEs, accounting for start and end DMA addresses.
> * Removed i40iw-specific enums for supported page sizes.
> * Removed vma_list from ib_umem_get().
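
The rdma_find_pg_bit() behavior described in these notes can be sketched in userspace as below. The details are illustrative (compiler builtins stand in for __fls/__ffs); addr == 0 is treated as "no alignment constraint", covering the zero-address case mentioned above.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: given an address (or an OR-mask of addresses)
 * and a HW page-size bitmap (assumed non-zero), return the largest
 * supported page bit the address is aligned to. */
static unsigned int find_pg_bit(uint64_t addr, uint64_t pgsz_bitmap)
{
	uint64_t align = addr & -addr;	/* lowest set bit; 0 when addr == 0 */
	/* keep only page sizes no larger than the alignment */
	uint64_t pgsz = align ? pgsz_bitmap & ((align << 1) - 1) : pgsz_bitmap;

	if (!pgsz)	/* nothing fits: fall back to the smallest supported size */
		return (unsigned int)__builtin_ctzll(pgsz_bitmap);
	return 63 - (unsigned int)__builtin_clzll(pgsz);	/* largest that fits */
}
```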
> 
> v2-->v3:
> ---------
> * Check VA/PA bits misalignment to restrict max page size for all SGL addresses in ib_umem_find_best_pgsz().
> * ib_umem_find_best_pgsz() extended to work with any IOVA.
> * IB_UMEM_VA_BASED_OFFSET flag removed.
> * DMA block iterator API split into 2 patches: one for HW that supports single block sizes, and a
>   second which extends the API to support HW that can do mixed block sizes.
> 
> Shiraz Saleem (6):
>   RDMA/umem: Add API to find best driver supported page size in an MR
>   RDMA/verbs: Add a DMA iterator to return aligned contiguous memory
>     blocks
>   RDMA/i40iw: Use core helpers to get aligned DMA address within a
>     supported page size
>   RDMA/bnxt_re: Use core helpers to get aligned DMA address
>   RDMA/umem: Remove hugetlb flag

Applied to for-next except for:

>   RDMA/verbs: Extend DMA block iterator support for mixed block sizes

Thanks,
Jason


