This code was using get_user_pages*(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls. There is some helpful
background in [2]: basically, this is a small part of fixing a
long-standing disconnect between pinning pages, and file systems' use
of those pages.

Note that this effectively changes the code's behavior as well: it now
ultimately calls set_page_dirty_lock(), instead of SetPageDirty(). This
is probably more accurate. As Christoph Hellwig put it, "set_page_dirty()
is only safe if we are dealing with a file backed page where we have
reference on the inode it hangs off." [3]

Also, this deletes one of the two FIXME comments (about refcounting),
because there is nothing wrong with the refcounting at this point.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/
[3] https://lore.kernel.org/r/20190723153640.GB720@xxxxxx

Cc: "Kai Mäkisara" <Kai.Makisara@xxxxxxxxxxx>
Cc: James E.J. Bottomley <jejb@xxxxxxxxxxxxx>
Cc: Martin K. Petersen <martin.petersen@xxxxxxxxxx>
Cc: linux-scsi@xxxxxxxxxxxxxxx
Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
---
 drivers/scsi/st.c | 19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index c5f9b348b438..0369c7edfd94 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -4922,7 +4922,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	unsigned long end = (uaddr + count + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	unsigned long start = uaddr >> PAGE_SHIFT;
 	const int nr_pages = end - start;
-	int res, i, j;
+	int res, i;
 	struct page **pages;
 	struct rq_map_data *mdata = &STbp->map_data;
 
@@ -4944,7 +4944,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 
 	/* Try to fault in all of the necessary pages */
 	/* rw==READ means read from drive, write into memory area */
-	res = get_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
+	res = pin_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
 				  pages);
 
 	/* Errors and no page mapped should return here */
@@ -4964,8 +4964,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	return nr_pages;
  out_unmap:
 	if (res > 0) {
-		for (j=0; j < res; j++)
-			put_page(pages[j]);
+		unpin_user_pages(pages, res);
 		res = 0;
 	}
 	kfree(pages);
@@ -4977,18 +4976,10 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 static int sgl_unmap_user_pages(struct st_buffer *STbp,
 				const unsigned int nr_pages, int dirtied)
 {
-	int i;
+	/* FIXME: cache flush missing for rw==READ */
 
-	for (i=0; i < nr_pages; i++) {
-		struct page *page = STbp->mapped_pages[i];
+	unpin_user_pages_dirty_lock(STbp->mapped_pages, nr_pages, dirtied);
 
-		if (dirtied)
-			SetPageDirty(page);
-		/* FIXME: cache flush missing for rw==READ
-		 * FIXME: call the correct reference counting function
-		 */
-		put_page(page);
-	}
 	kfree(STbp->mapped_pages);
 	STbp->mapped_pages = NULL;
 
-- 
2.26.2
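
Side note, for anyone making the same conversion in another driver: the
general pattern is simply to pair pin_user_pages_fast() with
unpin_user_pages() on the error path, and with unpin_user_pages_dirty_lock()
(which dirties pages via set_page_dirty_lock()) on the normal release path.
A minimal sketch of that pattern in isolation follows; it is illustration
only, not part of this patch, and the helper names (pin_buffer_for_dma(),
release_buffer()) and error handling are made up here:

	#include <linux/mm.h>
	#include <linux/slab.h>

	/* Illustration only: pin a user buffer for DMA ("Case 2" in [1]) */
	static int pin_buffer_for_dma(unsigned long uaddr, int nr_pages,
				      bool write_to_mem, struct page ***ppages)
	{
		struct page **pages;
		int res;

		pages = kmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return -ENOMEM;

		/* FOLL_WRITE when the device will write into the buffer */
		res = pin_user_pages_fast(uaddr, nr_pages,
					  write_to_mem ? FOLL_WRITE : 0, pages);
		if (res < nr_pages) {
			/* Unpin whatever was pinned before bailing out */
			if (res > 0)
				unpin_user_pages(pages, res);
			kfree(pages);
			return res < 0 ? res : -EFAULT;
		}

		*ppages = pages;
		return 0;
	}

	/* After the DMA completes: dirty (if needed) and unpin in one call */
	static void release_buffer(struct page **pages, int nr_pages, bool dirtied)
	{
		unpin_user_pages_dirty_lock(pages, nr_pages, dirtied);
		kfree(pages);
	}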