Re: [PATCH v3 2/2] lru: allow large batched add large folio to lru list

On 6/20/23 11:22, Matthew Wilcox wrote:
> On Sat, Apr 29, 2023 at 04:27:59PM +0800, Yin Fengwei wrote:
>> diff --git a/mm/swap.c b/mm/swap.c
>> index 57cb01b042f6..0f8554aeb338 100644
>> --- a/mm/swap.c
>> +++ b/mm/swap.c
>> @@ -228,8 +228,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
>>  static void folio_batch_add_and_move(struct folio_batch *fbatch,
>>  		struct folio *folio, move_fn_t move_fn)
>>  {
>> -	if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) &&
>> -	    !lru_cache_disabled())
>> +	if (folio_batch_add(fbatch, folio) && !lru_cache_disabled())
>>  		return;
>>  	folio_batch_move_lru(fbatch, move_fn);
>>  }
> 
> What if all you do is:
> 
> -	if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) &&
> -	    !lru_cache_disabled())
> +	if (folio_batch_add(fbatch, folio) && !lru_cache_disabled())
> 
> 
> How does that perform?
With the same hardware (Ice Lake, 48C/96T) and order-2 folios, the test results are as follows:

order2_without_the_patch:
  -   65.53%     0.22%  page_fault1_pro  [kernel.kallsyms]           [k] folio_lruvec_lock_irqsave
     - 65.30% folio_lruvec_lock_irqsave
        + 65.30% _raw_spin_lock_irqsave

order2_with_the_patch:
  -   19.94%     0.26%  page_fault1_pro  [kernel.vmlinux]            [k] folio_lruvec_lock_irqsave
     - 19.67% folio_lruvec_lock_irqsave
        + 19.67% _raw_spin_lock_irqsave
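
(For context on why the numbers move this much: with the folio_test_large()
check, every large-folio add drains the batch immediately, so the lruvec
spinlock is taken once per folio; without it, the batch only drains when it
fills, so the lock is taken roughly once per 15 folios. The toy userspace
model below is purely illustrative, not kernel code; FBATCH_SIZE mirrors the
kernel's PAGEVEC_SIZE of 15, and the helpers are simplified stand-ins for
folio_batch_add_and_move()/folio_batch_move_lru().)

#include <stdio.h>

#define FBATCH_SIZE 15	/* folio_batch capacity, PAGEVEC_SIZE in the kernel */

static unsigned long locks_taken;
static int batch_count;

/*
 * Model of folio_batch_move_lru(): draining the batch takes the
 * lruvec spinlock once, regardless of how many folios it holds.
 */
static void batch_move_lru(void)
{
	locks_taken++;
	batch_count = 0;
}

/*
 * Model of folio_batch_add_and_move(): keep batching unless the
 * batch is full, or (old behaviour) the folio is large.
 */
static void add_and_move(int folio_is_large, int skip_large_check)
{
	int has_space = ++batch_count < FBATCH_SIZE;

	if (has_space && (skip_large_check || !folio_is_large))
		return;
	batch_move_lru();
}

int main(void)
{
	const long faults = 1000000;	/* large-folio faults to simulate */

	for (int skip = 0; skip <= 1; skip++) {
		locks_taken = 0;
		batch_count = 0;
		for (long i = 0; i < faults; i++)
			add_and_move(1, skip);
		printf("%s folio_test_large() check: %lu lock acquisitions\n",
		       skip ? "without" : "with", locks_taken);
	}
	return 0;
}

Built with gcc, this reports roughly a 15x drop in lock acquisitions once
large folios are batched, which is consistent with the lruvec lock contention
falling from ~65% to ~20% in the profiles above.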


Regards
Yin, Fengwei



