Patch "Revert "mm: skip CMA pages when they are not available"" has been added to the 6.6-stable tree

This is a note to let you know that I've just added the patch titled

    Revert "mm: skip CMA pages when they are not available"

to the 6.6-stable tree, which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     revert-mm-skip-cma-pages-when-they-are-not-available.patch
and it can be found in the queue-6.6 subdirectory.

If you or anyone else feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit d6a1a1a9b688fbc8a97c1211626acab864e564cd
Author: Usama Arif <usamaarif642@xxxxxxxxx>
Date:   Wed Aug 21 20:26:07 2024 +0100

    Revert "mm: skip CMA pages when they are not available"
    
    [ Upstream commit bfe0857c20c663fcc1592fa4e3a61ca12b07dac9 ]
    
    This reverts commit 5da226dbfce3 ("mm: skip CMA pages when they are not
    available") and b7108d66318a ("Multi-gen LRU: skip CMA pages when they are
    not eligible").
    
    lruvec->lru_lock is highly contended and is held when calling
    isolate_lru_folios.  If the lru has a large number of CMA folios
    consecutively, while the allocation type requested is not MIGRATE_MOVABLE,
    isolate_lru_folios can hold the lock for a very long time while it skips
    those.  For an FIO workload, ~150 million order-0 folios were skipped to
    isolate a few ZONE_DMA folios [1].  This can cause lockups [1] and high
    memory pressure for extended periods of time [2].
    
    Remove skipping CMA for MGLRU as well, as it was introduced in sort_folio
    for the same reason as commit 5da226dbfce3 ("mm: skip CMA pages when they
    are not available").
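    
    For illustration, a simplified sketch of that pattern (not the exact
    kernel code; the scan counter and the src/dst list names are
    stand-ins): the whole scan in isolate_lru_folios runs with
    lruvec->lru_lock held, so every folio that is merely skipped still
    costs time under the lock:
    
        spin_lock_irq(&lruvec->lru_lock);
        while (scanned < nr_to_scan && !list_empty(src)) {
                struct folio *folio = lru_to_folio(src);
    
                scanned += folio_nr_pages(folio);
                /*
                 * The reverted check: a long consecutive run of CMA
                 * folios while the allocation is not MIGRATE_MOVABLE
                 * means millions of iterations here, none of which
                 * drop the lock.
                 */
                if (folio_zonenum(folio) > sc->reclaim_idx ||
                                skip_cma(folio, sc)) {
                        list_move(&folio->lru, &folios_skipped);
                        continue;
                }
                list_move(&folio->lru, dst);    /* actual isolation */
        }
        spin_unlock_irq(&lruvec->lru_lock);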
    
    [1] https://lore.kernel.org/all/CAOUHufbkhMZYz20aM_3rHZ3OcK4m2puji2FGpUpn_-DevGk3Kg@xxxxxxxxxxxxxx/
    [2] https://lore.kernel.org/all/ZrssOrcJIDy8hacI@xxxxxxxxx/
    
    [usamaarif642@xxxxxxxxx: also revert b7108d66318a, per Johannes]
      Link: https://lkml.kernel.org/r/9060a32d-b2d7-48c0-8626-1db535653c54@xxxxxxxxx
      Link: https://lkml.kernel.org/r/357ac325-4c61-497a-92a3-bdbd230d5ec9@xxxxxxxxx
    Link: https://lkml.kernel.org/r/9060a32d-b2d7-48c0-8626-1db535653c54@xxxxxxxxx
    Fixes: 5da226dbfce3 ("mm: skip CMA pages when they are not available")
    Signed-off-by: Usama Arif <usamaarif642@xxxxxxxxx>
    Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
    Cc: Bharata B Rao <bharata@xxxxxxx>
    Cc: Breno Leitao <leitao@xxxxxxxxxx>
    Cc: David Hildenbrand <david@xxxxxxxxxx>
    Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
    Cc: Rik van Riel <riel@xxxxxxxxxxx>
    Cc: Vlastimil Babka <vbabka@xxxxxxx>
    Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
    Cc: Zhaoyang Huang <huangzhaoyang@xxxxxxxxx>
    Cc: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
    Cc: <stable@xxxxxxxxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7175ff9b97d9..81533bed0b46 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2261,25 +2261,6 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 
 }
 
-#ifdef CONFIG_CMA
-/*
- * It is waste of effort to scan and reclaim CMA pages if it is not available
- * for current allocation context. Kswapd can not be enrolled as it can not
- * distinguish this scenario by using sc->gfp_mask = GFP_KERNEL
- */
-static bool skip_cma(struct folio *folio, struct scan_control *sc)
-{
-	return !current_is_kswapd() &&
-			gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
-			folio_migratetype(folio) == MIGRATE_CMA;
-}
-#else
-static bool skip_cma(struct folio *folio, struct scan_control *sc)
-{
-	return false;
-}
-#endif
-
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -2326,8 +2307,7 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx ||
-				skip_cma(folio, sc)) {
+		if (folio_zonenum(folio) > sc->reclaim_idx) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
@@ -4971,7 +4951,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* ineligible */
-	if (zone > sc->reclaim_idx || skip_cma(folio, sc)) {
+	if (zone > sc->reclaim_idx) {
 		gen = folio_inc_gen(lruvec, folio, false);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;



