The quilt patch titled
     Subject: mm: convert hugetlb_page_mapping_lock_write to folio
has been removed from the -mm tree.  Its filename was
     mm-convert-hugetlb_page_mapping_lock_write-to-folio.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: mm: convert hugetlb_page_mapping_lock_write to folio
Date: Fri, 12 Apr 2024 20:35:03 +0100

The page is only used to get the mapping, so the folio will do just as
well.  Both callers already have a folio available, so this saves a call
to compound_head().

Link: https://lkml.kernel.org/r/20240412193510.2356957-7-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reviewed-by: Jane Chu <jane.chu@xxxxxxxxxx>
Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
Acked-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
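As a rough illustration of where the saving comes from (a sketch after
the compat helper in mm/folio-compat.c, not a verbatim copy of the
kernel source): page_mapping() must first resolve the head page before
it can read the mapping, while folio_mapping() starts from the folio
directly:

	/*
	 * Sketch: page_mapping() is a compat wrapper that resolves
	 * the head page first.
	 */
	struct address_space *page_mapping(struct page *page)
	{
		/* page_folio() hides the compound_head() lookup */
		return folio_mapping(page_folio(page));
	}

A caller that already holds a folio, as both callers here do, can call
folio_mapping() (and now hugetlb_folio_mapping_lock_write()) directly
and skip that lookup.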
---

 include/linux/hugetlb.h |    6 +++---
 mm/hugetlb.c            |    6 +++---
 mm/memory-failure.c     |    2 +-
 mm/migrate.c            |    2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

--- a/include/linux/hugetlb.h~mm-convert-hugetlb_page_mapping_lock_write-to-folio
+++ a/include/linux/hugetlb.h
@@ -178,7 +178,7 @@ bool hugetlbfs_pagecache_present(struct
 			struct vm_area_struct *vma,
 			unsigned long address);
 
-struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
+struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio);
 
 extern int sysctl_hugetlb_shm_group;
 extern struct list_head huge_boot_pages[MAX_NUMNODES];
@@ -297,8 +297,8 @@ static inline unsigned long hugetlb_tota
 	return 0;
 }
 
-static inline struct address_space *hugetlb_page_mapping_lock_write(
-							struct page *hpage)
+static inline struct address_space *hugetlb_folio_mapping_lock_write(
+							struct folio *folio)
 {
 	return NULL;
 }
--- a/mm/hugetlb.c~mm-convert-hugetlb_page_mapping_lock_write-to-folio
+++ a/mm/hugetlb.c
@@ -2155,13 +2155,13 @@ static bool prep_compound_gigantic_folio
 /*
  * Find and lock address space (mapping) in write mode.
  *
- * Upon entry, the page is locked which means that page_mapping() is
+ * Upon entry, the folio is locked which means that folio_mapping() is
  * stable.  Due to locking order, we can only trylock_write.  If we can
  * not get the lock, simply return NULL to caller.
  */
-struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage)
+struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping(hpage);
+	struct address_space *mapping = folio_mapping(folio);
 
 	if (!mapping)
 		return mapping;
--- a/mm/memory-failure.c~mm-convert-hugetlb_page_mapping_lock_write-to-folio
+++ a/mm/memory-failure.c
@@ -1624,7 +1624,7 @@ static bool hwpoison_user_mappings(struc
 		 * TTU_RMAP_LOCKED to indicate we have taken the lock
		 * at this higher level.
		 */
-		mapping = hugetlb_page_mapping_lock_write(hpage);
+		mapping = hugetlb_folio_mapping_lock_write(folio);
 		if (mapping) {
 			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
 			i_mmap_unlock_write(mapping);
--- a/mm/migrate.c~mm-convert-hugetlb_page_mapping_lock_write-to-folio
+++ a/mm/migrate.c
@@ -1425,7 +1425,7 @@ static int unmap_and_move_huge_page(new_
 			 * semaphore in write mode here and set TTU_RMAP_LOCKED
 			 * to let lower levels know we have taken the lock.
 			 */
-			mapping = hugetlb_page_mapping_lock_write(&src->page);
+			mapping = hugetlb_folio_mapping_lock_write(src);
 			if (unlikely(!mapping))
 				goto unlock_put_anon;
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

squashfs-convert-squashfs_symlink_read_folio-to-use-folio-apis.patch
squashfs-remove-calls-to-set-the-folio-error-flag.patch
nilfs2-remove-calls-to-folio_set_error-and-folio_clear_error.patch