Re: [PATCH v3] mm: memcg: Use larger batches for proactive reclaim

On Fri 02-02-24 23:38:54, T.J. Mercier wrote:
> Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> reclaim") we passed the number of pages for the reclaim request directly
> to try_to_free_mem_cgroup_pages, which could lead to significant
> overreclaim. After 0388536ac291 the number of pages was limited to a
> maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
> However, such a small batch size caused a regression in reclaim
> performance due to many more reclaim start/stop cycles inside
> memory_reclaim.

You have mentioned that in one of the previous emails, but it would be good
to state the source of that overhead here for future reference.
 
> Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> nodes and cgroups over which the pages are spread. As such, the bigger
> the request, the bigger the absolute overreclaim error. Historic
> in-kernel users of reclaim have used fixed, small sized requests to
> approach an appropriate reclaim rate over time. When we reclaim a user
> request of arbitrary size, use decaying batch sizes to manage error while
> maintaining reasonable throughput.
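
To make the decay concrete (my numbers, assuming 4K pages and that each
call reclaims exactly the requested batch), a 1G request (262144 pages)
proceeds as:

    iteration 1:  remaining 262144  ->  batch 65536
    iteration 2:  remaining 196608  ->  batch 49152
    iteration 3:  remaining 147456  ->  batch 36864
    ...

i.e. each batch is 3/4 of the previous one, shrinking geometrically until
the request reaches the SWAP_CLUSTER_MAX floor that reclaim enforces
internally.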

Are these numbers with MGLRU or with the default reclaim implementation?
 
> root - full reclaim       pages/sec   time (sec)
> pre-0388536ac291      :    68047        10.46
> post-0388536ac291     :    13742        inf
> (reclaim-reclaimed)/4 :    67352        10.51
> 
> /uid_0 - 1G reclaim       pages/sec   time (sec)  overreclaim (MiB)
> pre-0388536ac291      :    258822       1.12            107.8
> post-0388536ac291     :    105174       2.49            3.5
> (reclaim-reclaimed)/4 :    233396       1.12            -7.4
> 
> /uid_0 - full reclaim     pages/sec   time (sec)
> pre-0388536ac291      :    72334        7.09
> post-0388536ac291     :    38105        14.45
> (reclaim-reclaimed)/4 :    72914        6.96
> 
> Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> Signed-off-by: T.J. Mercier <tjmercier@xxxxxxxxxx>
> Reviewed-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> 
> ---
> v3: Formatting fixes per Yosry Ahmed and Johannes Weiner. No functional
> changes.
> v2: Simplify the request size calculation per Johannes Weiner and Michal Koutný
> 
>  mm/memcontrol.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 46d8d02114cf..f6ab61128869 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6976,9 +6976,11 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
>  		if (!nr_retries)
>  			lru_add_drain_all();
>  
> +		/* Will converge on zero, but reclaim enforces a minimum */
> +		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;

Declaring the variable in the middle of the function body doesn't fit the
existing coding style, and I do not think there is a strong reason to go
against it here.

> +
>  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> -					min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> -					GFP_KERNEL, reclaim_options);
> +					batch_size, GFP_KERNEL, reclaim_options);
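
For context, the "minimum" the new comment refers to is applied inside
try_to_free_mem_cgroup_pages() itself, which (roughly, from memory of
current mm/vmscan.c) clamps the request when setting up its scan_control:

	struct scan_control sc = {
		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
		...
	};

so once batch_size has decayed below SWAP_CLUSTER_MAX it is bumped back up
to 32 pages and the loop finishes with the old fixed-size behaviour.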

Also, with the increased reclaim target, do we need something like this?

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9c854ce6cc..94794cf5ee9f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1889,7 +1889,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 
 		/* We are about to die and free our memory. Return now. */
 		if (fatal_signal_pending(current))
-			return SWAP_CLUSTER_MAX;
+			return sc->nr_to_reclaim;
 	}
 
 	lru_add_drain();
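
(My reading of the suggestion: that early return pretends progress was made
so a task with a fatal signal pending bails out of reclaim quickly. A
hardcoded SWAP_CLUSTER_MAX was enough when each call asked for at most 32
pages, but with the larger decayed batches it may no longer satisfy the
caller, hence returning sc->nr_to_reclaim instead.)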
>  
>  		if (!reclaimed && !nr_retries--)
>  			return -EAGAIN;
> -- 
> 2.43.0.594.gd9cf4e227d-goog

-- 
Michal Hocko
SUSE Labs



