RE: [PATCH RFC 05/12] RDMA/cxgb4: Use for_each_sg_dma_page iterator on umem SGL

> -----Original Message-----
> From: linux-rdma-owner@xxxxxxxxxxxxxxx <linux-rdma-
> owner@xxxxxxxxxxxxxxx> On Behalf Of Shiraz Saleem
> Sent: Saturday, January 26, 2019 10:59 AM
> To: dledford@xxxxxxxxxx; jgg@xxxxxxxx; linux-rdma@xxxxxxxxxxxxxxx
> Cc: Shiraz, Saleem <shiraz.saleem@xxxxxxxxx>; Steve Wise
> <swise@xxxxxxxxxxx>
> Subject: [PATCH RFC 05/12] RDMA/cxgb4: Use for_each_sg_dma_page
> iterator on umem SGL
> 
> From: "Shiraz, Saleem" <shiraz.saleem@xxxxxxxxx>
> 
> Use the for_each_sg_dma_page iterator variant to walk the umem
> DMA-mapped SGL and get the page DMA address. This avoids the extra
> loop to iterate pages in the SGE when for_each_sg iterator is used.
> 
> Additionally, purge umem->page_shift usage in the driver
> as it's only relevant for ODP MRs. Use system page size and
> shift instead.

Hey Shiraz, doesn't umem->page_shift allow registering huge pages
efficiently? I.e., is umem->page_shift set to the 2MB shift if the memory in
this umem region comes from the 2MB huge page pool?

> 
> Cc: Steve Wise <swise@xxxxxxxxxxx>
> Signed-off-by: Shiraz, Saleem <shiraz.saleem@xxxxxxxxx>
> ---
>  drivers/infiniband/hw/cxgb4/mem.c | 33 ++++++++++++++-------------------
>  1 file changed, 14 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
> index 96760a3..a9cd6f1 100644
> --- a/drivers/infiniband/hw/cxgb4/mem.c
> +++ b/drivers/infiniband/hw/cxgb4/mem.c
> @@ -502,10 +502,9 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>  			       u64 virt, int acc, struct ib_udata *udata)
>  {
>  	__be64 *pages;
> -	int shift, n, len;
> -	int i, k, entry;
> +	int shift, n, i;
>  	int err = -ENOMEM;
> -	struct scatterlist *sg;
> +	struct sg_dma_page_iter sg_iter;
>  	struct c4iw_dev *rhp;
>  	struct c4iw_pd *php;
>  	struct c4iw_mr *mhp;
> @@ -541,7 +540,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>  	if (IS_ERR(mhp->umem))
>  		goto err_free_skb;
> 
> -	shift = mhp->umem->page_shift;
> +	shift = PAGE_SHIFT;
> 
>  	n = mhp->umem->nmap;
>  	err = alloc_pbl(mhp, n);
> @@ -556,21 +555,17 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
> 
>  	i = n = 0;
> 
> -	for_each_sg(mhp->umem->sg_head.sgl, sg, mhp->umem->nmap, entry) {
> -		len = sg_dma_len(sg) >> shift;
> -		for (k = 0; k < len; ++k) {
> -			pages[i++] = cpu_to_be64(sg_dma_address(sg) +
> -						 (k << shift));
> -			if (i == PAGE_SIZE / sizeof *pages) {
> -				err = write_pbl(&mhp->rhp->rdev,
> -				      pages,
> -				      mhp->attr.pbl_addr + (n << 3), i,
> -				      mhp->wr_waitp);
> -				if (err)
> -					goto pbl_done;
> -				n += i;
> -				i = 0;
> -			}
> +	for_each_sg_dma_page(mhp->umem->sg_head.sgl, &sg_iter, mhp->umem->nmap, 0) {
> +		pages[i++] = cpu_to_be64(sg_page_iter_dma_address(&sg_iter));
> +		if (i == PAGE_SIZE / sizeof *pages) {
> +			err = write_pbl(&mhp->rhp->rdev,
> +					pages,
> +					mhp->attr.pbl_addr + (n << 3), i,
> +					mhp->wr_waitp);
> +			if (err)
> +				goto pbl_done;
> +			n += i;
> +			i = 0;
>  		}
>  	}
> 
> --
> 1.8.3.1
