The patch titled
     Subject: mm: shmem: shmem_writepage() split folio at EOF before swapout
has been added to the -mm mm-unstable branch.  Its filename is
     mm-shmem-support-large-folio-swap-out-fix.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-shmem-support-large-folio-swap-out-fix.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Hugh Dickins <hughd@xxxxxxxxxx>
Subject: mm: shmem: shmem_writepage() split folio at EOF before swapout
Date: Sun, 25 Aug 2024 16:14:17 -0700 (PDT)

Working in a constrained (size= or nr_blocks=) huge=always tmpfs relies
on swapout to split a large folio at EOF, to trim off its excess before
hitting premature ENOSPC: shmem_unused_huge_shrink() contains no code to
handle splitting huge swap blocks, and nobody would want that to be added.

Link: https://lkml.kernel.org/r/aef55f8d-6040-692d-65e3-16150cce4440@xxxxxxxxxx
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Barry Song <baohua@xxxxxxxxxx>
Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Lance Yang <ioworker0@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Pankaj Raghav <p.raghav@xxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/shmem.c |   15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

--- a/mm/shmem.c~mm-shmem-support-large-folio-swap-out-fix
+++ a/mm/shmem.c
@@ -1459,6 +1459,7 @@ static int shmem_writepage(struct page *
 	swp_entry_t swap;
 	pgoff_t index;
 	int nr_pages;
+	bool split = false;

 	/*
 	 * Our capabilities prevent regular writeback or sync from ever calling
@@ -1480,8 +1481,20 @@ static int shmem_writepage(struct page *
 	 * If /sys/kernel/mm/transparent_hugepage/shmem_enabled is "always" or
 	 * "force", drivers/gpu/drm/i915/gem/i915_gem_shmem.c gets huge pages,
 	 * and its shmem_writeback() needs them to be split when swapping.
+	 *
+	 * And shrinkage of pages beyond i_size does not split swap, so
+	 * swapout of a large folio crossing i_size needs to split too
+	 * (unless fallocate has been used to preallocate beyond EOF).
 	 */
-	if (wbc->split_large_folio && folio_test_large(folio)) {
+	if (folio_test_large(folio)) {
+		split = wbc->split_large_folio;
+		index = shmem_fallocend(inode,
+			DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));
+		if (index > folio->index && index < folio_next_index(folio))
+			split = true;
+	}
+
+	if (split) {
 try_split:
 		/* Ensure the subpages are still dirty */
 		folio_test_set_dirty(folio);
_

Patches currently in -mm which might be from hughd@xxxxxxxxxx are

mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large-fix.patch
mm-shmem-support-large-folio-swap-out-fix.patch
mm-shmem-fix-minor-off-by-one-in-shrinkable-calculation.patch
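---

For readers who want the new test in isolation: below is a minimal,
self-contained userspace C sketch of the EOF-crossing check the patch
adds, not kernel code.  folio_crosses_eof() and its parameters are
hypothetical stand-ins for the patch's shmem_fallocend(inode,
DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE)), folio->index and
folio_next_index(folio).

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/*
 * Split is needed only when the first page index beyond EOF (already
 * raised past the end of any fallocate() preallocation, as
 * shmem_fallocend() does in the patch) falls strictly inside the folio:
 * folios wholly below or wholly beyond that index are left unsplit.
 */
static bool folio_crosses_eof(unsigned long folio_index,
			      unsigned long nr_pages,
			      unsigned long eof_index)
{
	unsigned long next_index = folio_index + nr_pages;

	return eof_index > folio_index && eof_index < next_index;
}

int main(void)
{
	/* An i_size of 5 pages + 1 byte rounds up to an EOF index of 6 */
	unsigned long eof_index = DIV_ROUND_UP(5 * PAGE_SIZE + 1, PAGE_SIZE);

	/* A 16-page folio at index 0 straddles index 6: must split */
	printf("%d\n", folio_crosses_eof(0, 16, eof_index));	/* prints 1 */
	/* A 4-page folio at index 0 lies wholly below EOF: no split */
	printf("%d\n", folio_crosses_eof(0, 4, eof_index));	/* prints 0 */
	return 0;
}

As in the patch, a fallocate() preallocation past i_size raises
eof_index beyond the end of the folio, so such folios are swapped out
whole rather than split.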