[folded-merged] mm-memcg-use-larger-batches-for-proactive-reclaim-v4.patch removed from -mm tree

The quilt patch titled
     Subject: mm-memcg-use-larger-batches-for-proactive-reclaim-v4
has been removed from the -mm tree.  Its filename was
     mm-memcg-use-larger-batches-for-proactive-reclaim-v4.patch

This patch was dropped because it was folded into mm-memcg-use-larger-batches-for-proactive-reclaim.patch

------------------------------------------------------
From: "T.J. Mercier" <tjmercier@xxxxxxxxxx>
Subject: mm-memcg-use-larger-batches-for-proactive-reclaim-v4
Date: Tue, 6 Feb 2024 17:52:50 +0000

Add additional info to the commit message and move the definition of
batch_size, as suggested by Michal Hocko.  No functional changes.

Link: https://lkml.kernel.org/r/20240206175251.3364296-1-tjmercier@xxxxxxxxxx
Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
Signed-off-by: T.J. Mercier <tjmercier@xxxxxxxxxx>
Reviewed-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Acked-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Reviewed-by: Michal Koutny <mkoutny@xxxxxxxx>
Cc: Efly Young <yangyifei03@xxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memcontrol.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

--- a/mm/memcontrol.c~mm-memcg-use-larger-batches-for-proactive-reclaim-v4
+++ a/mm/memcontrol.c
@@ -6981,6 +6981,8 @@ static ssize_t memory_reclaim(struct ker
 
 	reclaim_options	= MEMCG_RECLAIM_MAY_SWAP | MEMCG_RECLAIM_PROACTIVE;
 	while (nr_reclaimed < nr_to_reclaim) {
+		/* Will converge on zero, but reclaim enforces a minimum */
+		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
 		unsigned long reclaimed;
 
 		if (signal_pending(current))
@@ -6994,9 +6996,6 @@ static ssize_t memory_reclaim(struct ker
 		if (!nr_retries)
 			lru_add_drain_all();
 
-		/* Will converge on zero, but reclaim enforces a minimum */
-		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
-
 		reclaimed = try_to_free_mem_cgroup_pages(memcg,
 					batch_size, GFP_KERNEL, reclaim_options);
 
_

Patches currently in -mm which might be from tjmercier@xxxxxxxxxx are

mm-memcg-dont-periodically-flush-stats-when-memcg-is-disabled.patch
mm-memcg-use-larger-batches-for-proactive-reclaim.patch




