Re: [PATCH mm-unstable v1] mm/hugetlb_vmemmap: fix memory loads ordering

On 07.01.25 05:35, Yu Zhao wrote:
Using x86_64 as an example, for a 32KB struct page[] area describing a
2MB hugeTLB, HVO reduces the area to 4KB by the following steps:
1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped
    by PTE 0, and at the same time change the permission from r/w to
    r/o;
3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB
    to 4KB.
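
(For reference, the sizes above work out as follows, assuming the usual
64-byte struct page on x86_64:

    2MB hugeTLB page              = 512 base pages
    512 * 64B of struct page      = 32KB of struct page[]
    32KB / 4KB per vmemmap page   = 8 vmemmap pages, i.e. 8 PTEs
    keep PTE 0, free PTE 1-7      = 4KB left, mapped r/o)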

However, the following race can happen due to improper ordering of memory
loads:
   CPU 1 (HVO)                     CPU 2 (speculative PFN walker)

   page_ref_freeze()
   synchronize_rcu()
                                   rcu_read_lock()
                                   page_is_fake_head() is false
   vmemmap_remap_pte()
   XXX: struct page[] becomes r/o

   page_ref_unfreeze()
                                   page_ref_count() is not zero

                                   atomic_add_unless(&page->_refcount)
                                   XXX: try to modify r/o struct page[]

Specifically, page_is_fake_head() must be ordered after page_ref_count() on
CPU 2, so that in this case it can only return true, avoiding the later
attempt to modify the r/o struct page[].

I *think* this is correct.
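
A tiny userspace C11 model of the pairing I think is required (illustrative
only; the names are made-up stand-ins, and it ignores RCU, the freeze cmpxchg
and where exactly the acquire ends up in the patch below):

/* cc -pthread -o model model.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int refcount = 1;  /* stands in for page->_refcount     */
static atomic_bool fake_head;    /* stands in for "remapped, now r/o" */

static void *hvo(void *arg)      /* CPU 1 */
{
        (void)arg;
        /* page_ref_freeze() */
        atomic_store_explicit(&refcount, 0, memory_order_relaxed);
        /* vmemmap_remap_pte(): struct page[] becomes r/o */
        atomic_store_explicit(&fake_head, true, memory_order_relaxed);
        /* page_ref_unfreeze(): release publishes the remap before the new count */
        atomic_store_explicit(&refcount, 1, memory_order_release);
        return NULL;
}

static void *walker(void *arg)   /* CPU 2, speculative PFN walker */
{
        (void)arg;
        /* acquire: seeing the unfrozen count implies seeing fake_head == true */
        if (atomic_load_explicit(&refcount, memory_order_acquire) == 0)
                return NULL;
        /* bail out instead of touching what is now r/o memory */
        if (atomic_load_explicit(&fake_head, memory_order_relaxed))
                return NULL;
        /* get_page_unless_zero() */
        atomic_fetch_add_explicit(&refcount, 1, memory_order_relaxed);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;
        pthread_create(&t1, NULL, hvo, NULL);
        pthread_create(&t2, NULL, walker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}

Without the acquire (or with the fake-head test done before the refcount
test, as in the race above), the walker can observe the new refcount but a
stale fake_head, and go on to write.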


This patch adds the missing memory barrier so that the page_is_fake_head()
and page_ref_count() tests are done in the proper order.

Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
Reported-by: Will Deacon <will@xxxxxxxxxx>
Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
---
  include/linux/page-flags.h | 2 +-
  include/linux/page_ref.h   | 8 ++++++--
  2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 691506bdf2c5..6b8ecf86f1b6 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
  	 * cold cacheline in some cases.
  	 */
  	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
-	    test_bit(PG_head, &page->flags)) {
+	    test_bit_acquire(PG_head, &page->flags)) {

This change will affect all page_fixed_fake_head() users, e.g. ordinary PageTail checks, even on !hugetlb.

I assume you want an explicit memory barrier in the single problematic caller instead.
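
Something like this in that caller, I assume (completely untested and only
meant to show the shape; the exact call site may well be different):

if (!page_ref_count(page))
        return false;
/* pairs with the release store in page_ref_unfreeze() */
smp_rmb();
if (page_is_fake_head(page))
        return false;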

--
Cheers,

David / dhildenb




