On Sat, Feb 23, 2019 at 01:26:41PM -0600, Shiraz Saleem wrote:
> On Tue, Feb 19, 2019 at 09:07:29PM -0700, Jason Gunthorpe wrote:
> > On Tue, Feb 19, 2019 at 08:57:43AM -0600, Shiraz Saleem wrote:
> > > Call the core helpers to retrieve the HW aligned address to use
> > > for the MR, within a supported i40iw page size.
> > >
> > > Remove code in i40iw to determine when MR is backed by 2M huge pages
> > > which involves checking the umem->hugetlb flag and VMA inspection.
> > > The core helpers will return the 2M aligned address if the
> > > MR is backed by 2M pages.
> > >
> > > -	for_each_sg_dma_page (region->sg_head.sgl, &sg_iter, region->nmap, 0) {
> > > -		pg_addr = sg_page_iter_dma_address(&sg_iter);
> > > -		if (first_pg)
> > > -			*pbl = cpu_to_le64(pg_addr & iwmr->page_msk);
> > > -		else if (!(pg_addr & ~iwmr->page_msk))
> > > -			*pbl = cpu_to_le64(pg_addr);
> > > -		else
> > > -			continue;
> > > -
> > > -		first_pg = false;
> > > +	for (ib_umem_start_phys_iter(region, &sg_phys_iter,
> > > -				     iwmr->page_size);
> >
> > Maybe this should be:
> >
> >   for_each_sg_dma_page_sz (region->sg_head.sgl, &sg_iter, region->nmap,
> >                            iwmr->page_size)
> >
> > ?
> >
> > Is there a reason to move away from the API we built here?
>
> Yes. Its a different iterator type we need to use here and
> iterate the SG list in contiguous aligned memory blocks as
> opposed to PAGE_SIZE increments.

I mean, why not add an option to the core api to do something other
than PAGE_SIZE increments?

Jason
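[For context, a minimal userspace sketch of the distinction being debated:
a PAGE_SIZE walk visits every page address, while a block-sized walk
coalesces pages into one entry per naturally aligned block (e.g. 2M).
The function and parameter names here are hypothetical illustrations,
not the kernel iterator API under discussion.]

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: given the per-page DMA addresses that a
 * sg_page_iter-style walk would produce, emit one page-buffer-list
 * entry per naturally aligned block of blk_size bytes -- roughly what
 * the driver loop above did by skipping addresses inside a block via
 * iwmr->page_msk. blk_size must be a power of two. */
static size_t fill_pbl(const uint64_t *pg_addrs, size_t npages,
		       uint64_t blk_size, uint64_t *pbl, size_t max)
{
	uint64_t mask = ~(blk_size - 1);
	size_t n = 0;

	for (size_t i = 0; i < npages; i++) {
		uint64_t blk = pg_addrs[i] & mask;

		/* Emit an entry only when crossing into a new block. */
		if (n == 0 || pbl[n - 1] != blk) {
			if (n == max)
				break;
			pbl[n++] = blk;
		}
	}
	return n;
}
```

With blk_size == 2M, the 512 consecutive 4K pages of a huge page collapse
into a single entry, which is the behavior the new iterator type provides
and which a size argument on the existing macro could also express.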