Re: [PATCH] mm: page_alloc: control latency caused by zone PCP draining

On Mon, 18 Mar 2024 21:07:36 +0100 Lucas Stach <l.stach@xxxxxxxxxxxxxx> wrote:

> When the complete PCP is drained, a much larger number of pages
> than the usual batch size might be freed at once,

How much larger?  Please include the numbers here.

> causing large
> IRQ and preemption latency spikes, as they are all freed while
> holding the pcp and zone spinlocks.

How large are these spikes?

> To avoid those latency spikes, limit the number of pages freed
> in a single bulk operation to common batch limits.
> 

And how large are they after this?

> ---
>  mm/page_alloc.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a663202045dc..64a6f9823c8c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2215,12 +2215,15 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
>   */
>  static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>  {
> -	struct per_cpu_pages *pcp;
> +	struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> +	int count = READ_ONCE(pcp->count);
> +
> +	while (count) {
> +		int to_drain = min(count, pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
> +		count -= to_drain;
>  
> -	pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> -	if (pcp->count) {
>  		spin_lock(&pcp->lock);
> -		free_pcppages_bulk(zone, pcp->count, pcp, 0);
> +		free_pcppages_bulk(zone, to_drain, pcp, 0);
>  		spin_unlock(&pcp->lock);
>  	}
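
(For context on the new cap: with the Kconfig default of
CONFIG_PCP_BATCH_SCALE_MAX=5 and a typical pcp->batch of 63 on larger
zones, each lock hold frees at most 63 << 5 = 2016 pages, i.e. just
under 8MB with 4KB pages.  If my arithmetic is right, it would help to
state that bound in the changelog as well.)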

I'm not seeing what prevents two CPUs from trying to free the same
pages simultaneously.
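
If the answer is simply pcp->lock, making that explicit would help,
e.g. by sampling pcp->count with the lock held on each iteration, so
the loop only ever frees what is actually on this CPU's list at that
moment.  Untested sketch of what I mean, keeping the patch's structure
and names:

static void drain_pages_zone(unsigned int cpu, struct zone *zone)
{
	struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
	int count;

	do {
		spin_lock(&pcp->lock);

		/*
		 * Re-read the count under the lock so each pass only
		 * frees pages that are on this CPU's list right now,
		 * bounded by the batch limit.
		 */
		count = pcp->count;
		if (count) {
			int to_drain = min(count,
				pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);

			free_pcppages_bulk(zone, to_drain, pcp, 0);
			count -= to_drain;
		}

		spin_unlock(&pcp->lock);
	} while (count);
}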




