On 11/10/19 7:21 PM, Ira Weiny wrote:
> On Fri, Nov 08, 2019 at 06:04:34PM -0800, John Hubbard wrote:
>> And get rid of the mmap_sem calls, as part of that. Note
>> that get_user_pages_fast() will, if necessary, fall back to
>> __gup_longterm_unlocked(), which takes the mmap_sem as needed.
>>
>> Cc: Jason Gunthorpe <jgg@xxxxxxxx>
>> Cc: Ira Weiny <ira.weiny@xxxxxxxxx>
>
> Reviewed-by: Ira Weiny <ira.weiny@xxxxxxxxx>
>

Thanks for the review, Ira! This will show up shortly, in the v3 series of
"mm/gup: track dma-pinned pages: FOLL_PIN, FOLL_LONGTERM".

thanks,
--
John Hubbard
NVIDIA

>> Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
>> ---
>>  drivers/infiniband/core/umem.c | 17 ++++++-----------
>>  1 file changed, 6 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
>> index 24244a2f68cc..3d664a2539eb 100644
>> --- a/drivers/infiniband/core/umem.c
>> +++ b/drivers/infiniband/core/umem.c
>> @@ -271,16 +271,13 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
>>  	sg = umem->sg_head.sgl;
>>
>>  	while (npages) {
>> -		down_read(&mm->mmap_sem);
>> -		ret = get_user_pages(cur_base,
>> -				     min_t(unsigned long, npages,
>> -					   PAGE_SIZE / sizeof (struct page *)),
>> -				     gup_flags | FOLL_LONGTERM,
>> -				     page_list, NULL);
>> -		if (ret < 0) {
>> -			up_read(&mm->mmap_sem);
>> +		ret = get_user_pages_fast(cur_base,
>> +					  min_t(unsigned long, npages,
>> +						PAGE_SIZE /
>> +						sizeof(struct page *)),
>> +					  gup_flags | FOLL_LONGTERM, page_list);
>> +		if (ret < 0)
>>  			goto umem_release;
>> -		}
>>
>>  		cur_base += ret * PAGE_SIZE;
>>  		npages -= ret;
>> @@ -288,8 +285,6 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
>>  		sg = ib_umem_add_sg_table(sg, page_list, ret,
>>  			dma_get_max_seg_size(context->device->dma_device),
>>  			&umem->sg_nents);
>> -
>> -		up_read(&mm->mmap_sem);
>>  	}
>>
>>  	sg_mark_end(sg);
>> --
>> 2.24.0
>>
>
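
For context, here is a minimal sketch of the two calling patterns the diff switches
between. This is not part of the patch: the helper names pin_pages_slow() and
pin_pages_fast() are made up for illustration, and the GUP signatures are assumed
to match the kernel of this era (roughly v5.4, where mmap_sem had not yet been
renamed to mmap_lock).

	/*
	 * Illustrative sketch only, not from the patch. Helper names are
	 * hypothetical; signatures assume the ~v5.4 GUP API.
	 */
	#include <linux/mm.h>
	#include <linux/rwsem.h>
	#include <linux/sched/mm.h>

	/* Before: the caller must hold mmap_sem across get_user_pages(). */
	static int pin_pages_slow(struct mm_struct *mm, unsigned long start,
				  unsigned long nr, unsigned int gup_flags,
				  struct page **pages)
	{
		int ret;

		down_read(&mm->mmap_sem);
		ret = get_user_pages(start, nr, gup_flags, pages, NULL);
		up_read(&mm->mmap_sem);
		return ret;
	}

	/*
	 * After: get_user_pages_fast() handles the locking internally,
	 * taking mmap_sem only if it has to fall back to the slow path.
	 */
	static int pin_pages_fast(unsigned long start, int nr,
				  unsigned int gup_flags, struct page **pages)
	{
		return get_user_pages_fast(start, nr, gup_flags, pages);
	}

The point of the conversion is visible in the second helper: the caller no longer
touches mmap_sem at all, which is what lets the error path in ib_umem_get() drop
the extra up_read() and the surrounding braces.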