Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

> Does the code not hold a refcount already?

The attached patch will do that. Note that it's currently based on top of
the patch that drops the PG_fscache alias, so it refers to PG_private_2.

I've run all three patches through xfstests over afs, both with and without
a cache, and Jeff has tested ceph with them.

David
---
commit 803a09110b41b9f6091a517fc8f5c4b15475048c
Author: David Howells <dhowells@xxxxxxxxxx>
Date:   Wed Feb 10 11:35:15 2021 +0000

    netfs: Hold a ref on a page when PG_private_2 is set

    Take a reference on a page when PG_private_2 is set and drop it once the
    bit is unlocked.

    Reported-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: David Howells <dhowells@xxxxxxxxxx>

diff --git a/fs/netfs/read_helper.c b/fs/netfs/read_helper.c
index 9018224693e9..043d96ca2aad 100644
--- a/fs/netfs/read_helper.c
+++ b/fs/netfs/read_helper.c
@@ -10,6 +10,7 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/pagemap.h>
+#include <linux/pagevec.h>
 #include <linux/slab.h>
 #include <linux/uio.h>
 #include <linux/sched/mm.h>
@@ -230,10 +231,13 @@ static void netfs_rreq_completed(struct netfs_read_request *rreq)
 static void netfs_rreq_unmark_after_write(struct netfs_read_request *rreq)
 {
 	struct netfs_read_subrequest *subreq;
+	struct pagevec pvec;
 	struct page *page;
 	pgoff_t unlocked = 0;
 	bool have_unlocked = false;
 
+	pagevec_init(&pvec);
+
 	rcu_read_lock();
 	list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
@@ -247,6 +251,8 @@ static void netfs_rreq_unmark_after_write(struct netfs_read_request *rreq)
 				continue;
 			unlocked = page->index;
 			unlock_page_private_2(page);
+			if (pagevec_add(&pvec, page) == 0)
+				pagevec_release(&pvec);
 			have_unlocked = true;
 		}
 	}
@@ -403,8 +409,10 @@ static void netfs_rreq_unlock(struct netfs_read_request *rreq)
 				pg_failed = true;
 				break;
 			}
-			if (test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags))
+			if (test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags)) {
+				get_page(page);
 				SetPagePrivate2(page);
+			}
 			pg_failed |= subreq_failed;
 			if (pgend < iopos + subreq->len)
 				break;