Re: [PATCH v2 09/10] mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing

On 09/02/2024 22:15, David Hildenbrand wrote:
> It's a pain that we have to handle cond_resched() in
> tlb_batch_pages_flush() manually and cannot simply handle it in
> release_pages() -- release_pages() can be called from atomic context.
> Well, in a perfect world we wouldn't have to make our code more
> complicated at all.
> 
> With page poisoning and init_on_free, we might now run into soft lockups
> when we free a lot of rather large folio fragments, because page freeing
> time then depends on the actual memory size we are freeing instead of on
> the number of folios that are involved.
> 
> In the absolute (unlikely) worst case, on arm64 with 64k pages we will be able
> to free up to 256 folio fragments that each span 512 MiB: zeroing out 128
> GiB does sound like it might take a while. But instead of ignoring this
> unlikely case, let's just handle it.
> 
> So, let's teach tlb_batch_pages_flush() that there are some
> configurations where page freeing is horribly slow, and let's reschedule
> more frequently -- similar to what we already did before we had large folio
> fragments in there. Note that we might end up freeing only a single folio
> fragment at a time that might exceed the old 512 pages limit: but if we
> cannot even free a single MAX_ORDER page on a system without running into
> soft lockups, something else is already completely bogus.
> 
> In the future, we might want to detect if handling cond_resched() is
> required at all, and just not do any of that with full preemption enabled.
> 
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  mm/mmu_gather.c | 50 ++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 41 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index d175c0f1e2c8..2774044b5790 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -91,18 +91,19 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  }
>  #endif
>  
> -static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
>  {
> -	struct mmu_gather_batch *batch;
> -
> -	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
> -		struct encoded_page **pages = batch->encoded_pages;
> +	struct encoded_page **pages = batch->encoded_pages;
> +	unsigned int nr, nr_pages;
>  
> +	/*
> +	 * We might end up freeing a lot of pages. Reschedule on a regular
> +	 * basis to avoid soft lockups in configurations without full
> +	 * preemption enabled. The magic number of 512 folios seems to work.
> +	 */
> +	if (!page_poisoning_enabled_static() && !want_init_on_free()) {

Is the performance win really worth 2 separate implementations keyed off this?
It seems a bit fragile, in case any other size-proportional operations get
added to the freeing path in future. Why not just always do the conservative
version?
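
By "the conservative version" I mean keeping only your second loop and using
it unconditionally; roughly something like this (untested sketch, reusing only
the helpers already present in the patch):

static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
{
	struct encoded_page **pages = batch->encoded_pages;
	unsigned int nr, nr_pages;

	/*
	 * Always bound a chunk by the number of actual pages it covers, so
	 * page poisoning / init_on_free cannot make a single call to
	 * free_pages_and_swap_cache() arbitrarily expensive.
	 */
	while (batch->nr) {
		for (nr = 0, nr_pages = 0;
		     nr < batch->nr && nr_pages < 512; nr++) {
			if (unlikely(encoded_page_flags(pages[nr]) &
				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
				nr_pages += encoded_nr_pages(pages[++nr]);
			else
				nr_pages++;
		}

		free_pages_and_swap_cache(pages, nr);
		pages += nr;
		batch->nr -= nr;

		cond_resched();
	}
}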

>  		while (batch->nr) {
> -			/*
> -			 * limit free batch count when PAGE_SIZE > 4K
> -			 */
> -			unsigned int nr = min(512U, batch->nr);
> +			nr = min(512, batch->nr);

If any entries are for more than 1 page, nr_pages will also be encoded in the
batch, so effectively this could be limiting to 256 actual folios (half of 512).
Is it worth checking for ENCODED_PAGE_BIT_NR_PAGES_NEXT and limiting accordingly?

nit: You're using the 512 magic number in 2 places now; perhaps make it a macro?
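
FWIW, a rough sketch of what I have in mind for the fast path, combining both
points above (untested; MAX_NR_FOLIOS_PER_FREE and nr_folios are just names I
made up):

#define MAX_NR_FOLIOS_PER_FREE	512

	unsigned int nr, nr_folios;

	while (batch->nr) {
		/*
		 * Count an encoded page together with its trailing nr_pages
		 * entry as a single folio, so one chunk really covers up to
		 * MAX_NR_FOLIOS_PER_FREE folios, not raw array entries.
		 */
		for (nr = 0, nr_folios = 0;
		     nr < batch->nr && nr_folios < MAX_NR_FOLIOS_PER_FREE;
		     nr++, nr_folios++) {
			if (unlikely(encoded_page_flags(pages[nr]) &
				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
				nr++;	/* also consume the nr_pages entry */
		}

		free_pages_and_swap_cache(pages, nr);
		pages += nr;
		batch->nr -= nr;

		cond_resched();
	}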

>  
>  			/*
>  			 * Make sure we cover page + nr_pages, and don't leave
> @@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
>  			cond_resched();
>  		}
>  	}
> +
> +	/*
> +	 * With page poisoning and init_on_free, the time it takes to free
> +	 * memory grows proportionally with the actual memory size. Therefore,
> +	 * limit based on the actual memory size and not the number of involved
> +	 * folios.
> +	 */
> +	while (batch->nr) {
> +		for (nr = 0, nr_pages = 0;
> +		     nr < batch->nr && nr_pages < 512; nr++) {
> +			if (unlikely(encoded_page_flags(pages[nr]) &
> +				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
> +				nr_pages += encoded_nr_pages(pages[++nr]);
> +			else
> +				nr_pages++;
> +		}

I guess the worst case here is freeing (511 + 8192) pages of 64K each = ~544M.
That's up from the old limit of 512 * 64K = 32M, and 511 pages more than the
worst case stated in the commit log. Are you comfortable with this? I guess the
only alternative
is to start splitting a batch which would be really messy. I agree your approach
is preferable if 544M is acceptable.
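
For clarity, how I get to ~544M (assuming 64K base pages and 512M PMD-sized
folios, i.e. 8192 pages per folio): the nr_pages < 512 check can still admit
511 single pages and then one final huge folio, so:

  511 pages  * 64K = ~32M
  8192 pages * 64K = 512M
  total            = ~544M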

> +
> +		free_pages_and_swap_cache(pages, nr);
> +		pages += nr;
> +		batch->nr -= nr;
> +
> +		cond_resched();
> +	}
> +}
> +
> +static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> +{
> +	struct mmu_gather_batch *batch;
> +
> +	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
> +		__tlb_batch_free_encoded_pages(batch);
>  	tlb->active = &tlb->local;
>  }
>  




