Re: [RFC PATCH] mm/vmscan: Don't round up scan size for online memory cgroup

Hello, Gavin!

On Mon, Feb 10, 2020 at 11:14:45PM +1100, Gavin Shan wrote:
> commit 68600f623d69 ("mm: don't miss the last page because of round-off
> error") makes the scan size round up when dividing by @denominator,
> regardless of the memory cgroup's state, online or offline. This affects
> the overall reclaim behavior: with the former (non-roundup) formula, an
> LRU list is eligible for reclaim only when its size, logically
> right-shifted by @sc->priority, is bigger than zero.

Not sure I fully understand, but wasn't that the case before 68600f623d69 too?

> For example, with 60/12 for swappiness/priority and without taking the
> scan/rotation ratio into account, the inactive anonymous LRU list needs
> at least 0x4000 pages to be eligible for reclaim. After the roundup is
> applied, the same list becomes eligible once its size reaches 0x1000:
> 
>     (0x4000 >> 12) * 60 / (60 + 140 + 1) = 1
>     ((0x1000 >> 12) * 60 + 200) / (60 + 140 + 1) = 1
> 
> aarch64 has a 512MB huge page size when the base page size is 64KB. In
> that case, a memory cgroup holding a huge page is always eligible for
> reclaim. Reclaim is likely to stop after the huge page is reclaimed,
> meaning the subsequent @sc->priority levels and memory cgroups will be
> skipped. This changes the overall reclaim behavior. This patch fixes the
> issue by applying the roundup to offlined memory cgroups only, giving
> preference to reclaiming memory from offlined cgroups. That sounds
> reasonable, as their memory is likely to be unused.

So is the problem that relatively small memory cgroups are now getting
reclaimed at the default priority, whereas before they were skipped?
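
To double-check the math, here is a minimal userspace sketch of the two
divisions with the numbers from your example (swappiness=60, priority=12,
scan/rotation ratio ignored; div_round_up() is just an open-coded stand-in
for the kernel's DIV64_U64_ROUND_UP()):

#include <stdio.h>
#include <stdint.h>

/* Open-coded stand-in for the kernel's DIV64_U64_ROUND_UP(). */
static uint64_t div_round_up(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

int main(void)
{
	const uint64_t swappiness = 60, denominator = 60 + 140 + 1;
	const int priority = 12;
	const uint64_t sizes[] = { 0x1000, 0x4000 };

	for (int i = 0; i < 2; i++) {
		/* lruvec size >> sc->priority, as in get_scan_count() */
		uint64_t scan = sizes[i] >> priority;

		printf("size=%#llx round-down=%llu round-up=%llu\n",
		       (unsigned long long)sizes[i],
		       (unsigned long long)(scan * swappiness / denominator),
		       (unsigned long long)div_round_up(scan * swappiness,
							denominator));
	}
	return 0;
}

With round-down the 0x1000-page list gets scan=0 and is skipped, while
with round-up it gets scan=1 and becomes eligible.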

> 
> The issue was found by starting up 8 VMs on an Ampere Mustang machine,
> which has 8 CPUs and 16GB of memory. Each VM is given 2 vCPUs and 2GB of
> memory. 784MB of swap space is consumed after all 8 VMs are completely
> up. Note that KSM is disabled while THP is enabled in the test. With
> this patch applied, the consumed swap space decreases to 60MB.
> 
>          total        used        free      shared  buff/cache   available
> Mem:     16196       10065        2049          16        4081        3749
> Swap:     8175         784        7391
>
>          total        used        free      shared  buff/cache   available
> Mem:     16196       11324        3656          24        1215        2936
> Swap:     8175          60        8115

Does it lead to any performance regressions? Or is it only about the
increased swap usage?

> 
> Fixes: 68600f623d69 ("mm: don't miss the last page because of round-off error")
> Cc: <stable@xxxxxxxxxxxxxxx> # v4.20+
> Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
> ---
>  mm/vmscan.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c05eb9efec07..876370565455 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2415,10 +2415,13 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>  			/*
>  			 * Scan types proportional to swappiness and
>  			 * their relative recent reclaim efficiency.
> -			 * Make sure we don't miss the last page
> -			 * because of a round-off error.
> +			 * Make sure we don't miss the last page on
> +			 * the offlined memory cgroups because of a
> +			 * round-off error.
>  			 */
> -			scan = DIV64_U64_ROUND_UP(scan * fraction[file],
> +			scan = mem_cgroup_online(memcg) ?
> +			       div64_u64(scan * fraction[file], denominator) :
> +			       DIV64_U64_ROUND_UP(scan * fraction[file],
>  						  denominator);

It looks a bit strange to round up for offline memory cgroups and round
down for everything else. So maybe it's better to return to something like
the very first version of the patch:
https://www.spinics.net/lists/kernel/msg2883146.html ?
For memcg reclaim purposes we only care about the edge case with a few pages.

But overall it's not obvious to me why rounding up is worse than rounding
down. Maybe we should round down but accumulate the remainder?
Creating an implicit bias for small memory cgroups sounds groundless.
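
Something like the following untested sketch, just to illustrate the idea
(scan_with_carry() is hypothetical, and the carried remainder would have
to live somewhere per-lruvec rather than in a local):

#include <stdio.h>
#include <stdint.h>

/*
 * Round the scan size down, but carry the remainder over to the next
 * call, so that no fraction of a page is lost over repeated passes.
 */
static uint64_t scan_with_carry(uint64_t scan, uint64_t fraction,
				uint64_t denominator, uint64_t *carry)
{
	uint64_t numerator = scan * fraction + *carry;

	*carry = numerator % denominator;
	return numerator / denominator;
}

int main(void)
{
	uint64_t carry = 0;

	/* scan=1 page, fraction=60, denominator=201, as in the example */
	for (int pass = 1; pass <= 4; pass++) {
		uint64_t scan = scan_with_carry(1, 60, 201, &carry);

		printf("pass %d: scan=%llu carry=%llu\n", pass,
		       (unsigned long long)scan,
		       (unsigned long long)carry);
	}
	return 0;
}

With those numbers the first three passes return 0 and the fourth returns
1, so the small list is eventually scanned, instead of never (plain
round-down) or on every pass (round-up).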

Thank you!

Roman



