[PATCH mm-unstable v1] mm/hugetlb_vmemmap: fix memory loads ordering

Using x86_64 as an example: for a 32KB struct page[] area describing a
2MB hugeTLB page, HVO reduces the area to 4KB via the following steps
(the arithmetic is sketched below):
1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
2. For the 8 PTEs mapping the area, remap PTEs 1-7 to the page mapped
   by PTE 0, and at the same time change their permission from r/w to
   r/o;
3. Free the pages that PTEs 1-7 previously mapped, hence the reduction
   from 32KB to 4KB.
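
For reference, here is the arithmetic behind the 32KB -> 4KB figure as
a minimal standalone sketch. The 4KB base page size and the 64-byte
sizeof(struct page) are assumptions for a typical x86_64 config, not
values taken from this patch:

  #include <stdio.h>

  int main(void)
  {
          unsigned long hugetlb_size = 2UL << 20;   /* 2MB hugeTLB page */
          unsigned long base_page_size = 4096;      /* 4KB base page */
          unsigned long struct_page_size = 64;      /* sizeof(struct page) */
          unsigned long nr_pages = hugetlb_size / base_page_size;     /* 512 */
          unsigned long vmemmap_size = nr_pages * struct_page_size;   /* 32KB */
          unsigned long vmemmap_ptes = vmemmap_size / base_page_size; /* 8 */

          /* HVO keeps the page mapped by PTE 0 and frees the other 7 */
          printf("vmemmap: %luKB -> %luKB (%lu of %lu pages freed)\n",
                 vmemmap_size >> 10, base_page_size >> 10,
                 vmemmap_ptes - 1, vmemmap_ptes);
          return 0;
  }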

However, the following race can happen due to improperly ordered memory
loads:
  CPU 1 (HVO)                     CPU 2 (speculative PFN walker)

  page_ref_freeze()
  synchronize_rcu()
                                  rcu_read_lock()
                                  page_is_fake_head() is false
  vmemmap_remap_pte()
  XXX: struct page[] becomes r/o

  page_ref_unfreeze()
                                  page_ref_count() is not zero

                                  atomic_add_unless(&page->_refcount)
                                  XXX: try to modify r/o struct page[]
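
To make the reader side concrete, below is a userspace C11 sketch of
the two checks (illustration only: flags and refcount are hypothetical
stand-ins for page->flags and page->_refcount, and bit 0 stands in for
PG_head becoming visible through a fake head). With the old order,
nothing prevents the flags load from being satisfied before the
refcount load, so a walker can observe "not a fake head" from before
the remap together with a nonzero refcount from after the unfreeze:

  #include <stdatomic.h>
  #include <stdbool.h>

  static _Atomic unsigned long flags;   /* stand-in for page->flags */
  static atomic_int refcount;           /* stand-in for page->_refcount */

  /* Old, buggy order: fake-head test first, refcount test second. */
  static bool reader_old(void)
  {
          bool fake = atomic_load_explicit(&flags, memory_order_relaxed) & 1;
          int ref = atomic_load_explicit(&refcount, memory_order_relaxed);

          return !fake && ref != 0;     /* may go on to write r/o memory */
  }

  /* Fixed order, mirroring this patch: refcount test first, and the
   * flags load gains acquire semantics, as test_bit_acquire() does. */
  static bool reader_new(void)
  {
          int ref = atomic_load_explicit(&refcount, memory_order_relaxed);
          bool fake = atomic_load_explicit(&flags, memory_order_acquire) & 1;

          return ref != 0 && !fake;
  }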

Specifically, the load in page_is_fake_head() must be ordered after the
load in page_ref_count() on CPU 2, so that page_is_fake_head() can only
return true in this case, avoiding the later attempt to modify the r/o
struct page[].

This patch adds the missing memory barrier and performs the
page_ref_count() and page_is_fake_head() tests in the proper order.
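
For completeness, here is the writer side of the same model, continuing
the sketch above (still hedged: the fake-head bit flip stands in for
vmemmap_remap_pte() aliasing the struct pages to the head page). The
release store mirrors page_ref_unfreeze(), which uses
atomic_set_release(); the freeze is modeled as seq_cst since the kernel
uses a full-barrier atomic_cmpxchg() there:

  /* Writer side (HVO), pairing with reader_new() above. */
  static void hvo_writer(void)
  {
          /* page_ref_freeze(): full-barrier cmpxchg in the kernel */
          atomic_store_explicit(&refcount, 0, memory_order_seq_cst);
          /* synchronize_rcu() happens here */
          /* vmemmap_remap_pte(): struct page[] becomes r/o, aliasing
           * the head page, so PG_head now reads as set */
          atomic_fetch_or_explicit(&flags, 1, memory_order_relaxed);
          /* page_ref_unfreeze(): atomic_set_release() */
          atomic_store_explicit(&refcount, 1, memory_order_release);
  }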

Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
Reported-by: Will Deacon <will@xxxxxxxxxx>
Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
---
 include/linux/page-flags.h | 2 +-
 include/linux/page_ref.h   | 8 ++++++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 691506bdf2c5..6b8ecf86f1b6 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 	 * cold cacheline in some cases.
 	 */
 	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
-	    test_bit(PG_head, &page->flags)) {
+	    test_bit_acquire(PG_head, &page->flags)) {
 		/*
 		 * We can safely access the field of the @page[1] with PG_head
 		 * because the @page is a compound page composed with at least
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 8c236c651d1d..5becea98bd79 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -233,8 +233,12 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 	bool ret = false;
 
 	rcu_read_lock();
-	/* avoid writing to the vmemmap area being remapped */
-	if (!page_is_fake_head(page) && page_ref_count(page) != u)
+	/*
+	 * To avoid writing to the vmemmap area remapped into r/o in parallel,
+	 * the page_ref_count() test must precede the page_is_fake_head() test
+	 * so that test_bit_acquire() in the latter is ordered after the former.
+	 */
+	if (page_ref_count(page) != u && !page_is_fake_head(page))
 		ret = atomic_add_unless(&page->_refcount, nr, u);
 	rcu_read_unlock();
 
-- 
2.47.1.613.gc27f4b7a9f-goog