The quilt patch titled
     Subject: mm: remove isolate_lru_page()
has been removed from the -mm tree.  Its filename was
     mm-remove-isolate_lru_page.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm: remove isolate_lru_page()
Date: Mon, 26 Aug 2024 14:58:13 +0800

There are no more callers of isolate_lru_page(), remove it.

[wangkefeng.wang@xxxxxxxxxx: convert page to folio in comment and document, per Matthew]
Link: https://lkml.kernel.org/r/20240826144114.1928071-1-wangkefeng.wang@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20240826065814.1336616-6-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 Documentation/mm/page_migration.rst                    |   22 +++++-----
 Documentation/mm/unevictable-lru.rst                   |    4 -
 Documentation/translations/zh_CN/mm/page_migration.rst |    6 +-
 mm/filemap.c                                           |    2 
 mm/folio-compat.c                                      |    7 ---
 mm/internal.h                                          |    1 
 mm/khugepaged.c                                        |    8 +--
 mm/migrate_device.c                                    |    4 -
 mm/swap.c                                              |    4 -
 9 files changed, 25 insertions(+), 33 deletions(-)

--- a/Documentation/mm/page_migration.rst~mm-remove-isolate_lru_page
+++ a/Documentation/mm/page_migration.rst
@@ -63,15 +63,15 @@ and then a low level description of how
 In kernel use of migrate_pages()
 ================================
 
-1. Remove pages from the LRU.
+1. Remove folios from the LRU.
 
-   Lists of pages to be migrated are generated by scanning over
-   pages and moving them into lists. This is done by
-   calling isolate_lru_page().
-   Calling isolate_lru_page() increases the references to the page
-   so that it cannot vanish while the page migration occurs.
+   Lists of folios to be migrated are generated by scanning over
+   folios and moving them into lists. This is done by
+   calling folio_isolate_lru().
+   Calling folio_isolate_lru() increases the references to the folio
+   so that it cannot vanish while the folio migration occurs.
    It also prevents the swapper or other scans from encountering
-   the page.
+   the folio.
 
 2. We need to have a function of type new_folio_t that can be
    passed to migrate_pages(). This function should figure out
@@ -84,10 +84,10 @@ In kernel use of migrate_pages()
 How migrate_pages() works
 =========================
 
-migrate_pages() does several passes over its list of pages. A page is moved
-if all references to a page are removable at the time. The page has
-already been removed from the LRU via isolate_lru_page() and the refcount
-is increased so that the page cannot be freed while page migration occurs.
+migrate_pages() does several passes over its list of folios. A folio is moved
+if all references to a folio are removable at the time. The folio has
+already been removed from the LRU via folio_isolate_lru() and the refcount
+is increased so that the folio cannot be freed while folio migration occurs.
 
 Steps:

--- a/Documentation/mm/unevictable-lru.rst~mm-remove-isolate_lru_page
+++ a/Documentation/mm/unevictable-lru.rst
@@ -80,7 +80,7 @@ on an additional LRU list for a few reas
 (2) We want to be able to migrate unevictable folios between nodes for memory
     defragmentation, workload management and memory hotplug.  The Linux kernel
     can only migrate folios that it can successfully isolate from the LRU
-    lists (or "Movable" pages: outside of consideration here).  If we were to
+    lists (or "Movable" folios: outside of consideration here).  If we were to
     maintain folios elsewhere than on an LRU-like list, where they can be
     detected by folio_isolate_lru(), we would prevent their migration.
 
@@ -230,7 +230,7 @@ In Nick's patch, he used one of the stru
 of VM_LOCKED VMAs that map the page (Rik van Riel had the same idea three years
 earlier).  But this use of the link field for a count prevented the management
 of the pages on an LRU list, and thus mlocked pages were not migratable as
-isolate_lru_page() could not detect them, and the LRU list link field was not
+folio_isolate_lru() could not detect them, and the LRU list link field was not
 available to the migration subsystem.
 
 Nick resolved this by putting mlocked pages back on the LRU list before
--- a/Documentation/translations/zh_CN/mm/page_migration.rst~mm-remove-isolate_lru_page
+++ a/Documentation/translations/zh_CN/mm/page_migration.rst
@@ -50,8 +50,8 @@ 在内核中使用 migrate_pages()
 1. 从LRU中移除页面。
 
-   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 isolate_lru_page()
-   来完成的。调用isolate_lru_page()增加了对该页的引用，这样在页面迁移发生的时候它就不会
+   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 folio_isolate_lru()
+   来完成的。调用folio_isolate_lru()增加了对该页的引用，这样在页面迁移发生的时候它就不会
    消失。它还可以防止交换器或其他扫描器遇到该页。
 
@@ -65,7 +65,7 @@ migrate_pages()如何工作
 =======================
 
 migrate_pages()对它的页面列表进行了多次处理。如果当时对一个页面的所有引用都可以被移除，
-那么这个页面就会被移动。该页已经通过isolate_lru_page()从LRU中移除，并且refcount被
+那么这个页面就会被移动。该页已经通过folio_isolate_lru()从LRU中移除，并且refcount被
 增加，以便在页面迁移发生的时候不释放该页。
 
 步骤:
--- a/mm/filemap.c~mm-remove-isolate_lru_page
+++ a/mm/filemap.c
@@ -114,7 +114,7 @@
  *    ->private_lock		(try_to_unmap_one)
  *    ->i_pages lock		(try_to_unmap_one)
  *    ->lruvec->lru_lock	(follow_page_mask->mark_page_accessed)
- *    ->lruvec->lru_lock	(check_pte_range->isolate_lru_page)
+ *    ->lruvec->lru_lock	(check_pte_range->folio_isolate_lru)
  *    ->private_lock		(folio_remove_rmap_pte->set_page_dirty)
  *    ->i_pages lock		(folio_remove_rmap_pte->set_page_dirty)
  *    bdi.wb->list_lock		(folio_remove_rmap_pte->set_page_dirty)
--- a/mm/folio-compat.c~mm-remove-isolate_lru_page
+++ a/mm/folio-compat.c
@@ -93,13 +93,6 @@ struct page *grab_cache_page_write_begin
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
 
-bool isolate_lru_page(struct page *page)
-{
-	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
-		return false;
-	return folio_isolate_lru((struct folio *)page);
-}
-
 void putback_lru_page(struct page *page)
 {
 	folio_putback_lru(page_folio(page));
--- a/mm/internal.h~mm-remove-isolate_lru_page
+++ a/mm/internal.h
@@ -416,7 +416,6 @@ extern unsigned long highest_memmap_pfn;
 /*
  * in mm/vmscan.c:
  */
-bool isolate_lru_page(struct page *page);
 bool folio_isolate_lru(struct folio *folio);
 void putback_lru_page(struct page *page);
 void folio_putback_lru(struct folio *folio);
--- a/mm/khugepaged.c~mm-remove-isolate_lru_page
+++ a/mm/khugepaged.c
@@ -627,8 +627,8 @@ static int __collapse_huge_page_isolate(
 		}
 
 		/*
-		 * We can do it before isolate_lru_page because the
-		 * page can't be freed from under us. NOTE: PG_lock
+		 * We can do it before folio_isolate_lru because the
+		 * folio can't be freed from under us. NOTE: PG_lock
 		 * is needed to serialize against split_huge_page
 		 * when invoked from the VM.
 		 */
@@ -1874,7 +1874,7 @@ static int collapse_file(struct mm_struc
 				result = SCAN_FAIL;
 				goto xa_unlocked;
 			}
-			/* drain lru cache to help isolate_lru_page() */
+			/* drain lru cache to help folio_isolate_lru() */
 			lru_add_drain();
 		} else if (folio_trylock(folio)) {
 			folio_get(folio);
@@ -1889,7 +1889,7 @@ static int collapse_file(struct mm_struc
 			page_cache_sync_readahead(mapping, &file->f_ra,
 						  file, index,
 						  end - index);
-			/* drain lru cache to help isolate_lru_page() */
+			/* drain lru cache to help folio_isolate_lru() */
 			lru_add_drain();
 			folio = filemap_lock_folio(mapping, index);
 			if (IS_ERR(folio)) {
--- a/mm/migrate_device.c~mm-remove-isolate_lru_page
+++ a/mm/migrate_device.c
@@ -328,8 +328,8 @@ static bool migrate_vma_check_page(struc
 
 	/*
 	 * One extra ref because caller holds an extra reference, either from
-	 * isolate_lru_page() for a regular page, or migrate_vma_collect() for
-	 * a device page.
+	 * folio_isolate_lru() for a regular folio, or migrate_vma_collect() for
+	 * a device folio.
 	 */
 	int extra = 1 + (page == fault_page);
--- a/mm/swap.c~mm-remove-isolate_lru_page
+++ a/mm/swap.c
@@ -906,8 +906,8 @@ atomic_t lru_disable_count = ATOMIC_INIT
 
 /*
  * lru_cache_disable() needs to be called before we start compiling
- * a list of pages to be migrated using isolate_lru_page().
- * It drains pages on LRU cache and then disable on all cpus until
+ * a list of folios to be migrated using folio_isolate_lru().
+ * It drains folios on LRU cache and then disable on all cpus until
  * lru_cache_enable is called.
  *
  * Must be paired with a call to lru_cache_enable().
_

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-support-poison-recovery-from-do_cow_fault.patch
mm-support-poison-recovery-from-copy_present_page.patch
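
[Editor's note, not part of the patch above: a minimal sketch of the in-kernel
pattern the updated documentation describes - isolate a folio from the LRU with
folio_isolate_lru(), collect it on a private list, and hand the list to
migrate_pages() with a new_folio_t callback.  my_alloc_dst() and
my_migrate_folio() are hypothetical names, the reason code MR_SYSCALL is chosen
arbitrarily, and the migrate_pages() signature assumed is the 6.11-era one this
patch was written against.  folio_isolate_lru() is declared in mm/internal.h,
so such code would live inside mm/.]

#include <linux/gfp.h>
#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include "internal.h"	/* folio_isolate_lru() is mm-internal */

/* Hypothetical new_folio_t callback: allocate a destination folio. */
static struct folio *my_alloc_dst(struct folio *src, unsigned long private)
{
	return folio_alloc(GFP_HIGHUSER_MOVABLE, folio_order(src));
}

/* Migrate a single folio the caller already holds a reference on (sketch). */
static int my_migrate_folio(struct folio *folio)
{
	LIST_HEAD(folio_list);
	unsigned int nr_succeeded = 0;
	int ret = -EBUSY;

	/* Per mm/swap.c: drain and disable the per-CPU LRU caches first. */
	lru_cache_disable();

	/* Step 1: take the folio off the LRU; this also takes a reference. */
	if (folio_isolate_lru(folio)) {
		list_add_tail(&folio->lru, &folio_list);

		/* Steps 2-3: pass the list and a new_folio_t callback. */
		ret = migrate_pages(&folio_list, my_alloc_dst, NULL, 0,
				    MIGRATE_SYNC, MR_SYSCALL, &nr_succeeded);

		/* Anything still on the list could not be migrated. */
		if (!list_empty(&folio_list))
			putback_movable_pages(&folio_list);
	}

	lru_cache_enable();
	return ret;
}

[Real callers such as do_migrate_range() typically pass alloc_migration_target()
as the callback and a reason code matching their subsystem, but the
isolate-then-migrate shape is the same.]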