> On Jul 26, 2024, at 02:39, David Hildenbrand <david@xxxxxxxxxx> wrote:
> 
> We recently made GUP's common page table walking code also walk hugetlb
> VMAs without most hugetlb special-casing, preparing for a future with
> less hugetlb-specific page table walking code in the codebase. Turns
> out that we missed one page table locking detail: page table locking
> for hugetlb folios that are not mapped using a single PMD/PUD.
> 
> Assume we have a hugetlb folio that spans multiple PTEs (e.g., 64 KiB
> hugetlb folios on arm64 with a 4 KiB base page size). GUP, as it walks
> the page tables, will perform a pte_offset_map_lock() to grab the PTE
> table lock.
> 
> However, hugetlb code that concurrently modifies these page tables
> would actually grab the mm->page_table_lock: with
> USE_SPLIT_PTE_PTLOCKS, the locks would differ. Something similar can
> happen right now with hugetlb folios that span multiple PMDs when
> USE_SPLIT_PMD_PTLOCKS.
> 
> Let's make huge_pte_lockptr() effectively use the same PT locks as any
> core-mm page table walker would.
> 
> There is one ugly case: powerpc 8xx, where we have an 8 MiB hugetlb
> folio being mapped using two PTE page tables. While hugetlb wants to
> take the PMD table lock, core-mm would grab the PTE table lock of one
> of the two PTE page tables. In such corner cases, we have to make sure
> that both locks match, which is (fortunately!) currently guaranteed for
> 8xx as it does not support SMP.
> 
> Fixes: 9cb28da54643 ("mm/gup: handle hugetlb in the generic follow_page_mask code")
> Cc: <stable@xxxxxxxxxxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>

Acked-by: Muchun Song <muchun.song@xxxxxxxxx>

Thanks.
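
For readers skimming the thread, here is a minimal sketch (not the actual
patch) of the direction described above: dispatch on the hugetlb page size
so that huge_pte_lockptr() returns the same lock a core-mm page table
walker would take at that mapping level. pud_lockptr(), pmd_lockptr(), and
huge_page_size() are existing helpers; ptep_lockptr() below is a
hypothetical helper that would have to resolve the split ptlock of the PTE
table containing the given pte.

	static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
						   struct mm_struct *mm, pte_t *pte)
	{
		const unsigned long size = huge_page_size(h);

		if (size >= PUD_SIZE)
			/* Mapped at PUD level: core-mm uses mm->page_table_lock. */
			return pud_lockptr(mm, (pud_t *)pte);
		if (size >= PMD_SIZE)
			/* Mapped at PMD level: use the (split) PMD table lock. */
			return pmd_lockptr(mm, (pmd_t *)pte);
		/*
		 * Mapped by PTEs (e.g., cont-PTE hugetlb on arm64): use the
		 * same (split) PTE table lock that pte_offset_map_lock()
		 * would grab in GUP. ptep_lockptr() is a hypothetical helper
		 * looking up the ptlock of the PTE table containing @pte.
		 */
		return ptep_lockptr(mm, pte);
	}

The PUD and PMD cases already match core-mm behavior; the interesting
change is the PTE case, where the old code fell back to
&mm->page_table_lock. The powerpc 8xx corner case (one folio spanning two
PTE page tables) is tolerable only because 8xx does not support SMP, so
the locks cannot actually differ in practice.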