On Wed, Apr 03, 2019 at 03:54:42PM -0300, Jason Gunthorpe wrote:
> On Tue, Apr 02, 2019 at 02:52:52PM -0500, Shiraz Saleem wrote:
> > Combine contiguous regions of PAGE_SIZE pages into single scatter
> > list entries while adding to the scatter table. This minimizes the
> > number of the entries in the scatter list and reduces the DMA mapping
> > overhead, particularly with the IOMMU.
> >
> > Set default max_seg_size in core for IB devices to 2G and do not
> > combine if we exceed this limit.
> >
> > Also, purge npages in struct ib_umem as we now DMA map the umem SGL
> > with sg_nents, and update remaining non ODP drivers that use
> > umem->npages. Move npages tracking to ib_umem_odp as ODP drivers
> > still need it.
> >
> > This patch should be applied post
> > https://patchwork.kernel.org/cover/10857607/
> >
> > Suggested-by: Jason Gunthorpe <jgg@xxxxxxxx>
> > Reviewed-by: Michael J. Ruhl <michael.j.ruhl@xxxxxxxxx>
> > Reviewed-by: Ira Weiny <ira.weiny@xxxxxxxxx>
> > Acked-by: Adit Ranadive <aditr@xxxxxxxxxx>
> > Signed-off-by: Shiraz Saleem <shiraz.saleem@xxxxxxxxx>
> > Tested-by: Gal Pressman <galpress@xxxxxxxxxx>
> > Tested-by: Selvin Xavier <selvin.xavier@xxxxxxxxxxxx>
>
> Okay, lets go with this!
>
> The other patches only impact the two drivers, so lets respin them and
> go ahead too.

Hm. Shiraz, I just noticed this:

--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -380,8 +380,8 @@ int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,
 		return -EINVAL;
 	}

-	ret = sg_pcopy_to_buffer(umem->sg_head.sgl, ib_umem_num_pages(umem),
-				 dst, length, offset + ib_umem_offset(umem));
+	ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->sg_nents, dst, length,
+				 offset + ib_umem_offset(umem));
 	if (ret < 0)
 		return ret;

Yes?

Jason