On Wed, Jun 23, 2021 at 09:10:05AM -0300, Jason Gunthorpe wrote:
> On Wed, Jun 23, 2021 at 08:23:17AM +0300, Leon Romanovsky wrote:
> > On Tue, Jun 22, 2021 at 10:18:16AM -0300, Jason Gunthorpe wrote:
> > > On Tue, Jun 22, 2021 at 02:39:42PM +0300, Leon Romanovsky wrote:
> > >
> > > > diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> > > > index 0eb40025075f..a76ef6a6bac5 100644
> > > > --- a/drivers/infiniband/core/umem.c
> > > > +++ b/drivers/infiniband/core/umem.c
> > > > @@ -51,11 +51,11 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
> > > >  	struct scatterlist *sg;
> > > >  	unsigned int i;
> > > >
> > > > -	if (umem->nmap > 0)
> > > > -		ib_dma_unmap_sg(dev, umem->sg_head.sgl, umem->sg_nents,
> > > > -				DMA_BIDIRECTIONAL);
> > > > +	if (dirty)
> > > > +		ib_dma_unmap_sgtable_attrs(dev, &umem->sg_head,
> > > > +					   DMA_BIDIRECTIONAL, 0);
> > > >
> > > > -	for_each_sg(umem->sg_head.sgl, sg, umem->sg_nents, i)
> > > > +	for_each_sgtable_dma_sg(&umem->sg_head, sg, i)
> > > >  		unpin_user_page_range_dirty_lock(sg_page(sg),
> > > >  			DIV_ROUND_UP(sg->length, PAGE_SIZE), make_dirty);
> > >
> > > This isn't right, can't mix sg_page with a _dma_ API
> >
> > Jason, why is that?
> >
> > We use the same pages that were passed to __sg_alloc_table_from_pages() in __ib_umem_get().
>
> A sgl has two lists inside it, a 'dma' list and a 'page' list; they are
> not the same length and not interchangeable.
>
> If you use for_each_sgtable_dma_sg() then you iterate over the 'dma'
> list and have to use 'dma' accessors.
>
> If you use for_each_sgtable_sg() then you iterate over the 'page' list
> and have to use 'page' accessors.
>
> Mixing dma iteration with page accessors, or vice-versa, like above, is
> always a bug.
>
> You can also see it because the old code used umem->sg_nents, which is
> the CPU list length, while this new code is using the dma list length.
Ohh, I see the difference between the types now, thank you for the
explanation. I will consult with Maor once he returns to the office next
week.

Thanks

> Jason