The quilt patch titled
     Subject: mm/compaction: optimize >0 order folio compaction with free page split.
has been removed from the -mm tree.  Its filename was
     mm-compaction-optimize-0-order-folio-compaction-with-free-page-split.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Zi Yan <ziy@xxxxxxxxxx>
Subject: mm/compaction: optimize >0 order folio compaction with free page split.
Date: Fri, 2 Feb 2024 11:15:54 -0500

During migration in a memory compaction, free pages are placed in an array
of page lists based on their order.  But the desired free page order
(i.e., the order of a source page) might not always be present, thus
leading to migration failures and premature compaction termination.  Split
a high order free page when the source migration page has a lower order to
increase the migration success rate.

Note: merging free pages when a migration fails and a lower order free
page is returned via compaction_free() is possible, but it would be too
much work.  Since the free pages are not buddy pages, it is hard to
identify these free pages using the existing PFN-based page merging
algorithm.

Link: https://lkml.kernel.org/r/20240202161554.565023-4-zi.yan@xxxxxxxx
Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
Reviewed-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Tested-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Adam Manzanares <a.manzanares@xxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Kemeng Shi <shikemeng@xxxxxxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Luis Chamberlain <mcgrof@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Yin Fengwei <fengwei.yin@xxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/compaction.c |   37 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

--- a/mm/compaction.c~mm-compaction-optimize-0-order-folio-compaction-with-free-page-split
+++ a/mm/compaction.c
@@ -1832,9 +1832,43 @@ static struct folio *compaction_alloc(st
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
 	int order = folio_order(src);
+	bool has_isolated_pages = false;
 
+again:
 	if (!cc->freepages[order].nr_pages) {
-		isolate_freepages(cc);
+		int i;
+
+		for (i = order + 1; i < NR_PAGE_ORDERS; i++) {
+			if (cc->freepages[i].nr_pages) {
+				struct page *freepage =
+					list_first_entry(&cc->freepages[i].pages,
+							struct page, lru);
+
+				int start_order = i;
+				unsigned long size = 1 << start_order;
+
+				list_del(&freepage->lru);
+				cc->freepages[i].nr_pages--;
+
+				while (start_order > order) {
+					start_order--;
+					size >>= 1;
+
+					list_add(&freepage[size].lru,
+						&cc->freepages[start_order].pages);
+					cc->freepages[start_order].nr_pages++;
+					set_page_private(&freepage[size], start_order);
+				}
+				dst = (struct folio *)freepage;
+				goto done;
+			}
+		}
+		if (!has_isolated_pages) {
+			isolate_freepages(cc);
+			has_isolated_pages = true;
+			goto again;
+		}
+
 		if (!cc->freepages[order].nr_pages)
 			return NULL;
 	}
@@ -1842,6 +1876,7 @@ static struct folio *compaction_alloc(st
 	dst = list_first_entry(&cc->freepages[order].pages, struct folio, lru);
 	cc->freepages[order].nr_pages--;
 	list_del(&dst->lru);
+done:
 	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
 	if (order)
 		prep_compound_page(&dst->page, order);
_

Patches currently in -mm which might be from ziy@xxxxxxxxxx are
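
Below is a minimal user-space C sketch of the free page split that the patch adds
to compaction_alloc(): take the first free block whose order is at least the
requested order, repeatedly peel the upper half off onto the lower-order lists,
and hand back a block of exactly the requested order.  All names here
(free_block, free_lists, push, pop, alloc_order, NR_ORDERS) are made-up
stand-ins for illustration, not the kernel's API; the real code works on
struct page arrays and list_heads hanging off cc->freepages, and error
handling is omitted.

#include <stdio.h>
#include <stdlib.h>

#define NR_ORDERS 11			/* assumption: orders 0..10 */

struct free_block {
	unsigned long pfn;		/* first page frame of the block */
	int order;			/* block covers 1 << order pages */
	struct free_block *next;
};

/* one singly linked free list per order, analogous to cc->freepages[] */
static struct free_block *free_lists[NR_ORDERS];

static void push(struct free_block *b, int order)
{
	b->order = order;
	b->next = free_lists[order];
	free_lists[order] = b;
}

static struct free_block *pop(int order)
{
	struct free_block *b = free_lists[order];

	if (b)
		free_lists[order] = b->next;
	return b;
}

/* Return a block of exactly 'order', splitting a larger one if needed. */
static struct free_block *alloc_order(int order)
{
	for (int i = order; i < NR_ORDERS; i++) {
		struct free_block *b = pop(i);
		int start_order = i;
		unsigned long size = 1UL << i;

		if (!b)
			continue;

		/* Peel off the upper half until we are down to 'order',
		 * mirroring the while (start_order > order) loop above. */
		while (start_order > order) {
			struct free_block *half = malloc(sizeof(*half));

			start_order--;
			size >>= 1;
			half->pfn = b->pfn + size;	/* upper half becomes a lower-order block */
			push(half, start_order);
		}
		b->order = order;
		return b;
	}
	return NULL;				/* nothing of this order or larger is free */
}

int main(void)
{
	struct free_block *b = malloc(sizeof(*b));

	b->pfn = 4096;				/* pretend we isolated one order-3 block */
	push(b, 3);

	struct free_block *got = alloc_order(0);

	printf("got order-%d block at pfn %lu\n", got->order, got->pfn);
	for (int i = 0; i < NR_ORDERS; i++)
		for (struct free_block *f = free_lists[i]; f; f = f->next)
			printf("  leftover: order-%d block at pfn %lu\n", i, f->pfn);
	return 0;
}

With a single order-3 block on the list, a request for order 0 returns pfn 4096
and leaves the split-off halves on the order-2, order-1 and order-0 lists
(pfns 4100, 4098 and 4097), which is what the patch does with the remainders
via list_add() and set_page_private().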