The quilt patch titled
     Subject: mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large-fix-2
has been removed from the -mm tree.  Its filename was
     mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large-fix-2.patch

This patch was dropped because it was folded into mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large.patch

------------------------------------------------------
From: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Subject: mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large-fix-2
Date: Tue, 27 Aug 2024 14:46:40 +0800

Now we only split the large folio to order 0, so drop the 'new_order'
parameter.

Link: https://lkml.kernel.org/r/39c71ccf-669b-4d9f-923c-f6b9c4ceb8df@xxxxxxxxxxxxxxxxx
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Barry Song <baohua@xxxxxxxxxx>
Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: Daniel Gomez <da.gomez@xxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Lance Yang <ioworker0@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Pankaj Raghav <p.raghav@xxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/shmem.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/mm/shmem.c~mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large-fix-2
+++ a/mm/shmem.c
@@ -1996,10 +1996,10 @@ static void shmem_set_folio_swapin_error
 }
 
 static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
-				   swp_entry_t swap, int new_order, gfp_t gfp)
+				   swp_entry_t swap, gfp_t gfp)
 {
 	struct address_space *mapping = inode->i_mapping;
-	XA_STATE_ORDER(xas, &mapping->i_pages, index, new_order);
+	XA_STATE_ORDER(xas, &mapping->i_pages, index, 0);
 	void *alloced_shadow = NULL;
 	int alloced_order = 0, i;
 
@@ -2027,7 +2027,7 @@ static int shmem_split_large_entry(struc
 	}
 
 	/* Try to split large swap entry in pagecache */
-	if (order > 0 && order > new_order) {
+	if (order > 0) {
 		if (!alloced_order) {
 			split_order = order;
 			goto unlock;
@@ -2038,7 +2038,7 @@ static int shmem_split_large_entry(struc
 		 * Re-set the swap entry after splitting, and the swap
 		 * offset of the original large entry must be continuous.
 		 */
-		for (i = 0; i < 1 << order; i += (1 << new_order)) {
+		for (i = 0; i < 1 << order; i++) {
 			pgoff_t aligned_index = round_down(index, 1 << order);
 			swp_entry_t tmp;
 
@@ -2124,7 +2124,7 @@ static int shmem_swapin_folio(struct ino
 	 * should split the large swap entry stored in the pagecache
 	 * if necessary.
 	 */
-	split_order = shmem_split_large_entry(inode, index, swap, 0, gfp);
+	split_order = shmem_split_large_entry(inode, index, swap, gfp);
	if (split_order < 0) {
 		error = split_order;
 		goto failed;
_

Patches currently in -mm which might be from baolin.wang@xxxxxxxxxxxxxxxxx are

mm-swap-extend-swap_shmem_alloc-to-support-batch-swap_map_shmem-flag-setting.patch
mm-shmem-extend-shmem_partial_swap_usage-to-support-large-folio-swap.patch
mm-filemap-use-xa_get_order-to-get-the-swap-entry-order.patch
mm-shmem-use-swap_free_nr-to-free-shmem-swap-entries.patch
mm-shmem-support-large-folio-allocation-for-shmem_replace_folio.patch
mm-shmem-drop-folio-reference-count-using-nr_pages-in-shmem_delete_from_page_cache.patch
mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large.patch
mm-shmem-support-large-folio-swap-out.patch
mm-shmem-support-large-folio-swap-out-fix-2.patch
mm-khugepaged-expand-the-is_refcount_suitable-to-support-file-folios.patch
mm-khugepaged-use-the-number-of-pages-in-the-folio-to-check-the-reference-count.patch
mm-khugepaged-support-shmem-mthp-copy.patch
mm-khugepaged-support-shmem-mthp-collapse.patch
selftests-mm-support-shmem-mthp-collapse-testing.patch
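
For readers following the loop change above: after this fix the split is
always to order 0, so each of the 1 << order slots in the naturally
aligned range takes a swap entry whose offset is the large entry's base
offset plus the slot index, which is why the loop can step by one.  A
minimal user-space sketch of that offset arithmetic (not kernel code;
all names and values here are hypothetical, chosen for illustration):

#include <stdio.h>

int main(void)
{
	unsigned long index = 13;	/* pagecache index that faulted */
	unsigned long offset = 64;	/* swap offset of the large entry */
	int order = 2;			/* large entry covers 1 << 2 slots */
	/* start of the order-aligned range, like round_down() above */
	unsigned long aligned_index = index & ~((1UL << order) - 1);
	int i;

	/* one order-0 entry per slot, offsets continuous from the base */
	for (i = 0; i < 1 << order; i++)
		printf("index %lu -> order-0 swap offset %lu\n",
		       aligned_index + i, offset + i);
	return 0;
}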