The patch titled
     Subject: mm: shmem: support large folio swap out fix 2
has been added to the -mm mm-unstable branch.  Its filename is
     mm-shmem-support-large-folio-swap-out-fix-2.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-shmem-support-large-folio-swap-out-fix-2.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Subject: mm: shmem: support large folio swap out fix 2
Date: Wed, 28 Aug 2024 16:28:38 +0800

As Hugh said:
"
The i915 THP splitting in shmem_writepage() was to avoid mm VM_BUG_ONs
and crashes when shmem.c did not support huge page swapout: but now you
are enabling that support, and such VM_BUG_ONs and crashes are gone (so
far as I can see: and this is written on a laptop using the i915
driver).  I cannot think of why i915 itself would care how mm
implements swapout (beyond enjoying faster): I think all the
wbc->split_large_folio you introduce here should be reverted.
"

So this fixup patch removes wbc->split_large_folio, as suggested by
Hugh.
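With that revert in place, the split decision in shmem_writepage()
reduces to the following (a sketch of the post-patch logic, mirroring
the mm/shmem.c hunk below):

	if (folio_test_large(folio)) {
		/* First fallocated index beyond i_size, if any */
		index = shmem_fallocend(inode,
			DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));
		/*
		 * Split when the folio straddles i_size without having
		 * been preallocated past EOF, or when CONFIG_THP_SWAP
		 * is not enabled.
		 */
		if ((index > folio->index && index < folio_next_index(folio)) ||
		    !IS_ENABLED(CONFIG_THP_SWAP))
			split = true;
	}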
Link: https://lkml.kernel.org/r/1236a002daa301b3b9ba73d6c0fab348427cf295.1724833399.git.baolin.wang@xxxxxxxxxxxxxxxxx
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Barry Song <baohua@xxxxxxxxxx>
Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Lance Yang <ioworker0@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Pankaj Raghav <p.raghav@xxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |    1 -
 include/linux/writeback.h                 |    1 -
 mm/shmem.c                                |    9 ++++-----
 mm/vmscan.c                               |    4 +---
 4 files changed, 5 insertions(+), 10 deletions(-)

--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c~mm-shmem-support-large-folio-swap-out-fix-2
+++ a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -308,7 +308,6 @@ void __shmem_writeback(size_t size, stru
 		.range_start = 0,
 		.range_end = LLONG_MAX,
 		.for_reclaim = 1,
-		.split_large_folio = 1,
 	};
 	unsigned long i;
 
--- a/include/linux/writeback.h~mm-shmem-support-large-folio-swap-out-fix-2
+++ a/include/linux/writeback.h
@@ -63,7 +63,6 @@ struct writeback_control {
 	unsigned range_cyclic:1;	/* range_start is cyclic */
 	unsigned for_sync:1;		/* sync(2) WB_SYNC_ALL writeback */
 	unsigned unpinned_netfs_wb:1;	/* Cleared I_PINNING_NETFS_WB */
-	unsigned split_large_folio:1;	/* Split large folio for shmem writeback */
 
 	/*
 	 * When writeback IOs are bounced through async layers, only the
--- a/mm/shmem.c~mm-shmem-support-large-folio-swap-out-fix-2
+++ a/mm/shmem.c
@@ -1478,19 +1478,18 @@ static int shmem_writepage(struct page *
 		goto redirty;
 
 	/*
-	 * If /sys/kernel/mm/transparent_hugepage/shmem_enabled is "always" or
-	 * "force", drivers/gpu/drm/i915/gem/i915_gem_shmem.c gets huge pages,
-	 * and its shmem_writeback() needs them to be split when swapping.
+	 * If CONFIG_THP_SWAP is not enabled, the large folio should be
+	 * split when swapping.
 	 *
 	 * And shrinkage of pages beyond i_size does not split swap, so
 	 * swapout of a large folio crossing i_size needs to split too
 	 * (unless fallocate has been used to preallocate beyond EOF).
 	 */
 	if (folio_test_large(folio)) {
-		split = wbc->split_large_folio;
 		index = shmem_fallocend(inode,
 			DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));
-		if (index > folio->index && index < folio_next_index(folio))
+		if ((index > folio->index && index < folio_next_index(folio)) ||
+		    !IS_ENABLED(CONFIG_THP_SWAP))
 			split = true;
 	}
 
--- a/mm/vmscan.c~mm-shmem-support-large-folio-swap-out-fix-2
+++ a/mm/vmscan.c
@@ -681,10 +681,8 @@ static pageout_t pageout(struct folio *f
 		 * not enabled or contiguous swap entries are failed to
 		 * allocate.
 		 */
-		if (shmem_mapping(mapping) && folio_test_large(folio)) {
+		if (shmem_mapping(mapping) && folio_test_large(folio))
 			wbc.list = folio_list;
-			wbc.split_large_folio = !IS_ENABLED(CONFIG_THP_SWAP);
-		}
 
 		folio_set_reclaim(folio);
 		res = mapping->a_ops->writepage(&folio->page, &wbc);
_

Patches currently in -mm which might be from baolin.wang@xxxxxxxxxxxxxxxxx are

mm-shmem-simplify-the-suitable-huge-orders-validation-for-tmpfs.patch
mm-shmem-rename-shmem_is_huge-to-shmem_huge_global_enabled.patch
mm-shmem-move-shmem_huge_global_enabled-into-shmem_allowable_huge_orders.patch
mm-swap-extend-swap_shmem_alloc-to-support-batch-swap_map_shmem-flag-setting.patch
mm-shmem-extend-shmem_partial_swap_usage-to-support-large-folio-swap.patch
mm-filemap-use-xa_get_order-to-get-the-swap-entry-order.patch
mm-shmem-use-swap_free_nr-to-free-shmem-swap-entries.patch
mm-shmem-support-large-folio-allocation-for-shmem_replace_folio.patch
mm-shmem-support-large-folio-allocation-for-shmem_replace_folio-fix.patch
mm-shmem-support-large-folio-allocation-for-shmem_replace_folio-fix-fix.patch
mm-shmem-drop-folio-reference-count-using-nr_pages-in-shmem_delete_from_page_cache.patch
mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large.patch
mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large-fix-2.patch
mm-shmem-support-large-folio-swap-out.patch
mm-shmem-support-large-folio-swap-out-fix-2.patch
mm-khugepaged-expand-the-is_refcount_suitable-to-support-file-folios.patch
mm-khugepaged-use-the-number-of-pages-in-the-folio-to-check-the-reference-count.patch
mm-khugepaged-support-shmem-mthp-copy.patch
mm-khugepaged-support-shmem-mthp-collapse.patch
selftests-mm-support-shmem-mthp-collapse-testing.patch