> Subject: [PATCH 04/14] RDMA/umem: Add rdma_umem_for_each_dma_block()
>
> This helper does the same as rdma_for_each_block(), except it works on a
> umem. This simplifies most of the call sites.
>
> Signed-off-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
> ---

[...]

> diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
> index b51339328a51ef..beb611b157bc8d 100644
> --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
> +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
> @@ -1320,8 +1320,7 @@ static void i40iw_copy_user_pgaddrs(struct i40iw_mr *iwmr,
>  	if (iwmr->type == IW_MEMREG_TYPE_QP)
>  		iwpbl->qp_mr.sq_page = sg_page(region->sg_head.sgl);
>
> -	rdma_for_each_block(region->sg_head.sgl, &biter, region->nmap,
> -			    iwmr->page_size) {
> +	rdma_umem_for_each_dma_block(region, &biter, iwmr->page_size) {
>  		*pbl = rdma_block_iter_dma_address(&biter);
>  		pbl = i40iw_next_pbl_addr(pbl, &pinfo, &idx);
>  	}

Acked-by: Shiraz Saleem <shiraz.saleem@xxxxxxxxx>

[....]

> +static inline void __rdma_umem_block_iter_start(struct ib_block_iter *biter,
> +						struct ib_umem *umem,
> +						unsigned long pgsz)
> +{
> +	__rdma_block_iter_start(biter, umem->sg_head.sgl, umem->nmap, pgsz);
> +}
> +
> +/**
> + * rdma_umem_for_each_dma_block - iterate over contiguous DMA blocks of the umem
> + * @umem: umem to iterate over
> + * @pgsz: Page size to split the list into
> + *
> + * pgsz must be <= PAGE_SIZE or computed by ib_umem_find_best_pgsz().

>= ?