Re: high kswapd CPU usage with symmetrical swap in/out pattern with multi-gen LRU

Hi Yu,

On 12/2/2023 5:22 AM, Yu Zhao wrote:
> Charan, does the fix previously attached seem acceptable to you? Any
> additional feedback? Thanks.

First, thanks for taking this patch upstream.

A comment on the code snippet: checking just the 'high wmark' pages
might succeed here, but the check can still fail in the immediately
following kswapd sleep path, see prepare_kswapd_sleep(). This can show
up as an increased KSWAPD_HIGH_WMARK_HIT_QUICKLY count, and thus
unnecessary kswapd run time.
@Jaroslav: Have you observed something like above?

So, in downstream, we have something like this for zone_watermark_ok():
unsigned long size = wmark_pages(zone, mark) + (MIN_LRU_BATCH << 2);

It is hard to justify the empirical 'MIN_LRU_BATCH << 2' value; maybe
we should at least use 'MIN_LRU_BATCH', with the reasoning mentioned
above. That is all I can say for this patch.

+	mark = sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING ?
+	       WMARK_PROMO : WMARK_HIGH;
+	for (i = 0; i <= sc->reclaim_idx; i++) {
+		struct zone *zone = lruvec_pgdat(lruvec)->node_zones + i;
+		unsigned long size = wmark_pages(zone, mark);
+
+		if (managed_zone(zone) &&
+		    !zone_watermark_ok(zone, sc->order, size, sc->reclaim_idx, 0))
+			return false;
+	}


Thanks,
Charan
