The patch titled
     Subject: userfaultfd: replace lru_cache functions with folio_add functions
has been added to the -mm mm-unstable branch.  Its filename is
     userfualtfd-replace-lru_cache-functions-with-folio_add-functions.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/userfualtfd-replace-lru_cache-functions-with-folio_add-functions.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Vishal Moola (Oracle)" <vishal.moola@xxxxxxxxx>
Subject: userfaultfd: replace lru_cache functions with folio_add functions
Date: Tue, 1 Nov 2022 10:53:24 -0700

Replaces lru_cache_add() and lru_cache_add_inactive_or_unevictable() with
folio_add_lru() and folio_add_lru_vma().  This is in preparation for the
removal of lru_cache_add().
Link: https://lkml.kernel.org/r/20221101175326.13265-4-vishal.moola@xxxxxxxxx
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Miklos Szeredi <mszeredi@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/userfaultfd.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/mm/userfaultfd.c~userfualtfd-replace-lru_cache-functions-with-folio_add-functions
+++ a/mm/userfaultfd.c
@@ -66,6 +66,7 @@ int mfill_atomic_install_pte(struct mm_s
 	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
 	bool page_in_cache = page_mapping(page);
 	spinlock_t *ptl;
+	struct folio *folio;
 	struct inode *inode;
 	pgoff_t offset, max_off;

@@ -113,14 +114,15 @@ int mfill_atomic_install_pte(struct mm_s
 	if (!pte_none_mostly(*dst_pte))
 		goto out_unlock;

+	folio = page_folio(page);
 	if (page_in_cache) {
 		/* Usually, cache pages are already added to LRU */
 		if (newly_allocated)
-			lru_cache_add(page);
+			folio_add_lru(folio);
 		page_add_file_rmap(page, dst_vma, false);
 	} else {
 		page_add_new_anon_rmap(page, dst_vma, dst_addr);
-		lru_cache_add_inactive_or_unevictable(page, dst_vma);
+		folio_add_lru_vma(folio, dst_vma);
 	}

 	/*
_

Patches currently in -mm which might be from vishal.moola@xxxxxxxxx are

ext4-convert-move_extent_per_page-to-use-folios.patch
khugepage-replace-try_to_release_page-with-filemap_release_folio.patch
memory-failure-convert-truncate_error_page-to-use-folio.patch
folio-compat-remove-try_to_release_page.patch
filemap-convert-replace_page_cache_page-to-replace_page_cache_folio.patch
fuse-convert-fuse_try_move_page-to-use-folios.patch
userfualtfd-replace-lru_cache-functions-with-folio_add-functions.patch
khugepage-replace-lru_cache_add-with-folio_add_lru.patch
folio-compat-remove-lru_cache_add.patch