The patch titled
     Subject: FIXUP: mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-swap-free_swap_and_cache_nr-as-batched-free_swap_and_cache-fix.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-swap-free_swap_and_cache_nr-as-batched-free_swap_and_cache-fix.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Ryan Roberts <ryan.roberts@xxxxxxx>
Subject: FIXUP: mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
Date: Tue, 9 Apr 2024 12:18:40 +0100

Fix a build warning on parisc [1] caused by its implementation of
__swp_entry_to_pte() not correctly putting the macro arguments in
parentheses.  It turns out that a number of other arches are faulty in
this regard as well.

Also add an extra statement to the documentation for
pte_next_swp_offset(), as suggested by David.
[1] https://lore.kernel.org/all/202404091749.ScNPJ8j4-lkp@xxxxxxxxx/

Link: https://lkml.kernel.org/r/20240409111840.3173122-1-ryan.roberts@xxxxxxx
Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Barry Song <21cnbao@xxxxxxxxx>
Cc: Barry Song <v-songbaohua@xxxxxxxx>
Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Gao Xiang <xiang@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Lance Yang <ioworker0@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/internal.h |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/mm/internal.h~mm-swap-free_swap_and_cache_nr-as-batched-free_swap_and_cache-fix
+++ a/mm/internal.h
@@ -194,7 +194,8 @@ static inline int folio_pte_batch(struct
 /**
  * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
- * @pte: The initial pte state; is_swap_pte(pte) must be true.
+ * @pte: The initial pte state; is_swap_pte(pte) must be true and
+ *	 non_swap_entry() must be false.
  *
  * Increments the swap offset, while maintaining all other fields, including
  * swap type, and any swp pte bits. The resulting pte is returned.
@@ -203,7 +204,7 @@ static inline pte_t pte_next_swp_offset(
 {
 	swp_entry_t entry = pte_to_swp_entry(pte);
 	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
-						   swp_offset(entry) + 1));
+						   (swp_offset(entry) + 1)));
 
 	if (pte_swp_soft_dirty(pte))
 		new = pte_swp_mksoft_dirty(new);
_

Patches currently in -mm which might be from ryan.roberts@xxxxxxx are

mm-swap-remove-cluster_flag_huge-from-swap_cluster_info-flags.patch
mm-swap-free_swap_and_cache_nr-as-batched-free_swap_and_cache.patch
mm-swap-free_swap_and_cache_nr-as-batched-free_swap_and_cache-fix.patch
mm-swap-simplify-struct-percpu_cluster.patch
mm-swap-update-get_swap_pages-to-take-folio-order.patch
mm-swap-allow-storage-of-all-mthp-orders.patch
mm-vmscan-avoid-split-during-shrink_folio_list.patch
mm-madvise-avoid-split-during-madv_pageout-and-madv_cold.patch