Re: [PATCH] ceph: switch back to testing for NULL folio->private in ceph_dirty_folio

On Mon, Jun 13, 2022 at 08:48:40AM +0800, Xiubo Li wrote:
> 
> On 6/10/22 11:40 PM, Jeff Layton wrote:
> > Willy requested that we change this back to warning on folio->private
> > being non-NULL. He's trying to kill off the PG_private flag, and so we'd
> > like to catch where it's non-NULL.
> > 
> > Add a VM_WARN_ON_FOLIO macro (since one doesn't exist yet) and change
> > over to using that instead of VM_BUG_ON_FOLIO, testing the ->private
> > pointer directly.
> > 
> > Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> > Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> > ---
> >   fs/ceph/addr.c          | 2 +-
> >   include/linux/mmdebug.h | 9 +++++++++
> >   2 files changed, 10 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > index b43cc01a61db..b24d6bdb91db 100644
> > --- a/fs/ceph/addr.c
> > +++ b/fs/ceph/addr.c
> > @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
> >   	 * Reference snap context in folio->private.  Also set
> >   	 * PagePrivate so that we get invalidate_folio callback.
> >   	 */
> > -	VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
> > +	VM_WARN_ON_FOLIO(folio->private, folio);
> >   	folio_attach_private(folio, snapc);
> >   	return ceph_fscache_dirty_folio(mapping, folio);
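
The mmdebug.h hunk isn't quoted above, but assuming the new helper simply
follows the existing VM_BUG_ON_FOLIO / VM_WARN_ON_ONCE_PAGE pattern (dump
the folio, then WARN), it would look something like this sketch, presumably
with a BUILD_BUG_ON_INVALID() fallback for !CONFIG_DEBUG_VM:

#define VM_WARN_ON_FOLIO(cond, folio)		({			\
	int __ret_warn = !!(cond);					\
									\
	if (unlikely(__ret_warn)) {					\
		dump_page(&folio->page,					\
			  "VM_WARN_ON_FOLIO(" __stringify(cond)")");	\
		WARN_ON(1);						\
	}								\
	unlikely(__ret_warn);						\
})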

I found a couple of places where page->private needs to be NULLed out.
Neither of them is Ceph's fault.  I decided that testing whether
folio->private and PG_private are in agreement was better done in
folio_unlock() than in any of the other potential places we could
check for it.

diff --git a/mm/filemap.c b/mm/filemap.c
index 8ef861297ffb..acef71f75e78 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1535,6 +1535,9 @@ void folio_unlock(struct folio *folio)
 	BUILD_BUG_ON(PG_waiters != 7);
 	BUILD_BUG_ON(PG_locked > 7);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_private(folio) &&
+			!folio_test_swapbacked(folio) &&
+			folio_get_private(folio), folio);
 	if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0)))
 		folio_wake_bit(folio, PG_locked);
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2e2a8b5bc567..af0751a79c19 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2438,6 +2438,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 			page_tail);
 	page_tail->mapping = head->mapping;
 	page_tail->index = head->index + tail;
+	page_tail->private = 0;
 
 	/* Page flags must be visible before we make the page non-compound. */
 	smp_wmb();
diff --git a/mm/migrate.c b/mm/migrate.c
index eb62e026c501..fa8e36e74f0d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1157,6 +1157,8 @@ static int unmap_and_move(new_page_t get_new_page,
 	newpage = get_new_page(page, private);
 	if (!newpage)
 		return -ENOMEM;
+	BUG_ON(compound_order(newpage) != compound_order(page));
+	newpage->private = 0;
 
 	rc = __unmap_and_move(page, newpage, force, mode);
 	if (rc == MIGRATEPAGE_SUCCESS)



