On Thu, Aug 01, 2019 at 08:07:55AM +0200, Christoph Hellwig wrote:
> On Tue, Jul 30, 2019 at 01:57:03PM -0700, john.hubbard@xxxxxxxxx wrote:
> > @@ -40,10 +40,7 @@
> >  static void __qib_release_user_pages(struct page **p, size_t num_pages,
> >  				     int dirty)
> >  {
> > -	if (dirty)
> > -		put_user_pages_dirty_lock(p, num_pages);
> > -	else
> > -		put_user_pages(p, num_pages);
> > +	put_user_pages_dirty_lock(p, num_pages, dirty);
> >  }
>
> __qib_release_user_pages should be removed now, as a direct call to
> put_user_pages_dirty_lock is a lot clearer.
>
> > index 0b0237d41613..62e6ffa9ad78 100644
> > +++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
> > @@ -75,10 +75,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
> >  		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
> >  			page = sg_page(sg);
> >  			pa = sg_phys(sg);
> > -			if (dirty)
> > -				put_user_pages_dirty_lock(&page, 1);
> > -			else
> > -				put_user_page(page);
> > +			put_user_pages_dirty_lock(&page, 1, dirty);
> >  			usnic_dbg("pa: %pa\n", &pa);
>
> There is a pre-existing bug here, as this needs to use the sg_page
> iterator. Probably worth throwing a fix into your series while you
> are at it.

Sadly usnic does not use the core rdma umem abstraction, but open codes
an old version of it. In this version each sge in the sgl is exactly
one page (see usnic_uiom_get_pages), so I think this loop is not a bug?

Jason
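
For illustration, a minimal sketch of the cleanup Christoph is suggesting
for qib, assuming the call site looks roughly like the mainline
qib_user_pages.c of that era (the body shown here is abbreviated and the
surrounding accounting is elided, not an exact quote of the driver):

	/*
	 * Sketch only: the one-line __qib_release_user_pages() wrapper goes
	 * away and the remaining caller passes the dirty flag straight
	 * through.  Assumes the put_user_pages_dirty_lock(pages, npages,
	 * make_dirty) signature introduced by this series.
	 */
	void qib_release_user_pages(struct page **p, size_t num_pages)
	{
		/* was: __qib_release_user_pages(p, num_pages, 1); */
		put_user_pages_dirty_lock(p, num_pages, true);

		/* ... existing pinned_vm accounting, unchanged ... */
	}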
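
For comparison, this is roughly what the sg_page iterator version
Christoph is asking for would look like inside usnic_uiom_put_pages(),
using the generic helpers from <linux/scatterlist.h>; if each sge really
is exactly one page, as Jason argues, it visits the same pages as the
existing for_each_sg() loop:

	struct sg_page_iter piter;

	/*
	 * Walk pages rather than sg entries.  for_each_sg_page() and
	 * sg_page_iter_page() handle sg entries that span multiple pages,
	 * which is the case the for_each_sg()/sg_page() loop would miss.
	 */
	for_each_sg_page(chunk->page_list, &piter, chunk->nents, 0) {
		struct page *page = sg_page_iter_page(&piter);

		put_user_pages_dirty_lock(&page, 1, dirty);
	}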
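
The one-page-per-sge claim follows from how the chunk sgl is built.
Paraphrasing the fill loop in usnic_uiom_get_pages() from memory (the
page_list and cur names here are illustrative, not the driver's actual
variables):

	/*
	 * Rough paraphrase of the sgl construction: every entry is set to
	 * a single page of length PAGE_SIZE via sg_set_page(), so sg_page()
	 * in the put path can never skip a page within an entry.
	 */
	for_each_sg(chunk->page_list, sg, chunk->nents, i) {
		sg_set_page(sg, page_list[cur], PAGE_SIZE, 0);
		cur++;
	}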