On 08/16/23 16:11, Matthew Wilcox (Oracle) wrote:
> Store the folio order in the low byte of the flags word in the first
> tail page. This frees up the word that was being used to store the
> order and dtor bytes previously.

hugetlb manually creates and destroys compound pages, and as such it
makes assumptions about struct page layout. This change breaks hugetlb.
The patch below fixes the breakage.

The hugetlb code is quite fragile when changes like this are made. I am
open to suggestions on how we can make this more robust. Perhaps start
with a simple set of APIs to create_folio from a set of contiguous
pages and destroy a folio? A rough, untested sketch of what I mean
follows the patch.
--
Mike Kravetz

From 8d8aa4486a4119f6d694b423b2f68161b4e7432c Mon Sep 17 00:00:00 2001
From: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Date: Tue, 22 Aug 2023 15:30:43 -0700
Subject: [PATCH] hugetlb: clear flags in tail pages that will be freed
 individually

Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
---
 mm/hugetlb.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a82c3104337e..cbc25826c9b0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1484,6 +1484,7 @@ static void __destroy_compound_gigantic_folio(struct folio *folio,
 
 	for (i = 1; i < nr_pages; i++) {
 		p = folio_page(folio, i);
+		p->flags &= ~PAGE_FLAGS_CHECK_AT_FREE;
 		p->mapping = NULL;
 		clear_compound_head(p);
 		if (!demote)
@@ -1702,8 +1703,6 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
 static void __update_and_free_hugetlb_folio(struct hstate *h,
 					struct folio *folio)
 {
-	int i;
-	struct page *subpage;
 	bool clear_dtor = folio_test_hugetlb_vmemmap_optimized(folio);
 
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1745,14 +1744,6 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 		spin_unlock_irq(&hugetlb_lock);
 	}
 
-	for (i = 0; i < pages_per_huge_page(h); i++) {
-		subpage = folio_page(folio, i);
-		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
-				1 << PG_referenced | 1 << PG_dirty |
-				1 << PG_active | 1 << PG_private |
-				1 << PG_writeback);
-	}
-
 	/*
 	 * Non-gigantic pages demoted from CMA allocated gigantic pages
 	 * need to be given back to CMA in free_gigantic_folio.
-- 
2.41.0
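
P.S. For anyone reading along without Matthew's patch handy, my
understanding of the new encoding is roughly the following. This is a
paraphrase from memory, not a quote of the patch, so treat the details
as approximate:

	/*
	 * Paraphrased sketch of the quoted change: folio->_flags_1 is
	 * the flags word of the first tail page, and its low byte now
	 * holds the folio order.  See Matthew's patch for the real
	 * version.
	 */
	static inline unsigned int folio_order(struct folio *folio)
	{
		if (!folio_test_large(folio))
			return 0;
		return folio->_flags_1 & 0xff;
	}

	static inline void folio_set_order(struct folio *folio,
						unsigned int order)
	{
		folio->_flags_1 = (folio->_flags_1 & ~0xffUL) | order;
	}

If I am reading it right, the order byte aliases real page flag bits,
which is why freeing tail pages individually without first clearing
their flags now trips the bad-page checks, and why the patch above
clears PAGE_FLAGS_CHECK_AT_FREE in each tail page.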
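
To make the create_folio/destroy suggestion above concrete, below is
the kind of thing I have in mind. Completely untested sketch;
create_folio() and destroy_folio() are made-up names, and a real
version would need to handle the demote and refcount details that
hugetlb cares about:

	/*
	 * Sketch only.  Initialize a run of contiguous pages as a
	 * compound folio, and tear one back down so its pages can be
	 * freed individually, without callers touching struct page
	 * internals.
	 */
	static struct folio *create_folio(struct page *page,
						unsigned int order)
	{
		prep_compound_page(page, order);
		return page_folio(page);
	}

	static void destroy_folio(struct folio *folio)
	{
		long i;

		for (i = 1; i < folio_nr_pages(folio); i++) {
			struct page *p = folio_page(folio, i);

			/* Clear state the bad-page checks reject at free time. */
			p->flags &= ~PAGE_FLAGS_CHECK_AT_FREE;
			p->mapping = NULL;
			clear_compound_head(p);
		}
		ClearPageHead(&folio->page);
	}

With something like that in core mm, __destroy_compound_gigantic_folio()
would shrink to little more than a destroy_folio() call, and future
struct page layout changes would have one place to update instead of
open-coded loops scattered around hugetlb.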